The Second International Conference on Generative Pre-trained Transformer Models and Beyond

GPTMB 2025

July 06, 2025 to July 10, 2025 - Venice, Italy

Deadlines

Submission

Mar 18, 2025

Notification

May 04, 2025

Registration

May 18, 2025

Camera ready

Jun 01, 2025

Deadlines differ for special tracks. Please consult the conference home page for the special tracks' Calls for Papers (if any).

Publication

Published by IARIA Press (operated by Xpert Publishing Services)

Archived in the Open Access IARIA ThinkMind Digital Library

Prints available at Curran Associates, Inc.

Authors of selected papers will be invited to submit extended versions to an IARIA Journal

Onsite and Online Options: To accommodate various situations, we offer either physical presence or virtual participation (PDF slides or pre-recorded videos).

ISSN:
ISBN: 978-1-68558-287-6

GPTMB 2025 is colocated with the following events as part of DigiTech 2025 Congress:

  • DIGITAL 2025, Advances on Societal Digital Transformation
  • IoTAI 2025, The Second International Conference on IoT-AI
  • GPTMB 2025, The Second International Conference on Generative Pre-trained Transformer Models and Beyond
  • AIMEDIA 2025, The First International Conference on AI-based Media Innovation

GPTMB 2025 Steering Committee

Petre Dini
IARIA
USA/EU


Isaac Caicedo-Castro
University of Córdoba
Colombia


Tzung-Pei Hong
National University of Kaohsiung
Taiwan


Stephan Böhm
RheinMain University of Applied Sciences - Wiesbaden
Germany


Alper Yaman
Fraunhofer IPA
Germany


Joni Salminen
University of Vaasa
Finland


Zhixiong Chen
Mercy College
USA


Christelle Scharff
Pace University
USA



GPTMB 2025 conference tracks:

Generative-AI basics
Generative pre-trained transformer (GPT) models
Transformer-based models and LLMs (Large Language Models)
Combination of GPT models and Reinforcement learning models
Creativity and originality in GPT-based tools
Taxonomy of context-based LLM training
Deep learning and LLMs
Retrieval augmented generation (RAG) and fine-tuning LLMs (see the RAG sketch after this list)
LLM and Reinforcement Learning from Human Feedback (RLHF)
LLMs (autoregressive, retrieval-augmented, autoencoding, reinforcement learning, etc.)
Computational resources for LLM training and for LLM-based applications
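
Since RAG recurs across these tracks, a minimal sketch of the retrieve-then-prompt loop may help fix terms. The three-document corpus, the bag-of-words scorer, and the prompt template below are illustrative assumptions, not the API of any particular framework.

```python
# Minimal RAG sketch: score documents against the query, then prepend the
# best match to the prompt. Corpus and template are illustrative only.
from collections import Counter
import math

CORPUS = [
    "GPTMB 2025 takes place in Venice, Italy.",
    "Retrieval augmented generation grounds LLM answers in retrieved text.",
    "Quantization reduces the memory footprint of large models.",
]

def score(query: str, doc: str) -> float:
    """Cosine similarity over word counts (a stand-in for a dense retriever)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def build_prompt(query: str, k: int = 1) -> str:
    """Retrieve the top-k documents and prepend them to the question."""
    top = sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]
    return "Context:\n" + "\n".join(top) + f"\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Where does GPTMB 2025 take place?"))
```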

LLMs
Large Language Models (LLM) taxonomy
Model characteristics (architecture, size, training data and duration)
Building, training, and fine-tuning LLMs
Performance (accuracy, latency, scalability)
Capabilities (content generation, translation, interactivity)
Domain (medical, legal, financial, education, etc.)
Ethics and safety (bias, fairness, filtering, explainability)
Legal (data privacy, data exfiltration, copyright, licensing)
Challenges (integrations, mismatching, overfitting, underfitting, hallucinations, interpretability, bias mitigation, ethics)

LLM-based tools and applications
Challenging requirements on basic actions and core principles
Methods for optimized selection of model size and complexity
Fine-tuning and personalization mechanisms
Human interaction and action alignment
Multimodal input/output capabilities (text with visual, audio, and other data types)
Adaptive learning or continuous learning (training optimization, context-awareness)
Range of languages and dialects, including regional expansion
Scalability, understandability, and explainability
Tools for software development, planning, workflows, coding, etc.
Applications in robotics, autonomous systems, and moving targets
Cross-disciplinary applications (finance, healthcare, technology, etc.)
Discovery and advanced scientific research applications
Computational requirements and energy consumption
Efficient techniques (quantization, pruning, etc.; see the quantization sketch after this list)
Reliability and security of LLM-based applications
Co-creation, open source, and global accessibility
Ethical considerations (bias mitigation, fairness, responsibility)
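
As a point of reference for the efficiency topics above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization, assuming NumPy. Real inference engines use calibrated, finer-grained schemes; this only illustrates the core round-and-rescale idea.

```python
# Sketch of symmetric per-tensor int8 post-training quantization.
# One scale for the whole tensor is a simplifying assumption.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights onto int8 with a single scale factor."""
    scale = max(float(np.abs(w).max()) / 127.0, 1e-12)  # guard against all-zero tensors
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights to check reconstruction accuracy."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
print("max reconstruction error:", float(np.abs(w - dequantize(q, s)).max()))
```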

Small-language models and tiny-language models
Architecture and design principles specific to small language models
Tiny language models for smartphones, IoT devices, edge devices, and embedded systems
Tools for small language models (DistilBERT, TinyBERT, MiniLM, etc.)
Knowledge distillation, quantization, low latency, resource optimization (see the distillation sketch after this list)
Energy efficiency for FPGAs and specialized ASICs for model deployment
Tiny language models for real-time translation apps and mobile-based chatbots
Tiny language models and federated learning for privacy
Small language models with vision capabilities for multimodal applications
Hardware considerations (energy, quantization, pruning, etc.)
Tiny language models and hardware accelerators (GPUs, TPUs, and ML-custom ASICs)
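
Knowledge distillation, named in the list above, transfers a large teacher's behavior to a small student. Below is a minimal sketch of the classic soft-target distillation loss, assuming PyTorch; the temperature T and mixing weight alpha are illustrative defaults, not recommended settings.

```python
# Soft-target knowledge distillation: the student matches the teacher's
# temperature-softened distribution while still fitting the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable to the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: batch of 8 examples, 10 classes, random logits.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```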

Critical Issues on Input Data
Datasets: accuracy, granularity, precision, false/true negative/positive
Visible vs invisible (private, personalized) data
Data extrapolation
Output biases and biased datasets
Sensitivity and specificity of datasets
Fake and incorrect information
Volatile data
Time-sensitive data

Critical Issues on Processing
Process truthfulness
Understandability, interpretability, and explainability
Detect biases and incorrectness
Incorporate interactive feedback
Incorporate corrections
Retrieval augmented generation (RAG) for LLM input
RLHF for LLM fine-tuning output

Output quality
Output biases and biased datasets
Sensitivity and specificity of datasets
Context-aware output
Fine/Coarse text summarization
Pre-evaluation of data quality (obsolete, incomplete, fake, noisy, etc.)
Validation of output (see the sketch after this list)
Detect and explain hallucinations
Detect biased and incorrect summarizations before they spread
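
As a toy illustration of output validation, the sketch below flags summary sentences whose content words have little support in the source text. The lexical-overlap heuristic and the 0.5 threshold are assumptions; practical detectors rely on entailment or fact-checking models, not word overlap.

```python
# Naive hallucination screen: flag summary sentences poorly supported
# (by word overlap) in the source. Threshold 0.5 is an arbitrary choice.
def unsupported_sentences(source: str, summary: str, threshold: float = 0.5):
    source_words = set(source.lower().split())
    flagged = []
    for sentence in summary.split("."):
        words = set(sentence.lower().split())
        if words and len(words & source_words) / len(words) < threshold:
            flagged.append(sentence.strip())
    return flagged

source = "GPTMB 2025 takes place in Venice and covers large language models."
summary = "GPTMB 2025 takes place in Venice. It awards a grand robotics prize."
print(unsupported_sentences(source, summary))  # flags the fabricated sentence
```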

Education and academic liability issues
Curricula revision for embedding AI-based tools and methodologies
User awareness of output trustworthiness
Copyright infringement rules
Plagiarism and self-plagiarism tools
Ownership infringement
Mechanisms for reference verification (see the sketch after this list)
Dealing with hidden self-references
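
As one naive mechanism for reference verification, the sketch below extracts citation keys from generated LaTeX and flags any key absent from a trusted bibliography. The regex and the KNOWN_KEYS set are illustrative assumptions.

```python
# Flag citation keys in generated text that are not in a trusted bibliography.
import re

KNOWN_KEYS = {"vaswani2017attention", "brown2020language"}  # illustrative only

def unverified_citations(text: str):
    """Return cited keys that cannot be matched against the bibliography."""
    cited = set(re.findall(r"\\cite\{([^}]+)\}", text))
    return cited - KNOWN_KEYS

draft = r"Transformers \cite{vaswani2017attention} scale well \cite{fake2024ghost}."
print(unverified_citations(draft))  # -> {'fake2024ghost'}
```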

Regulations and limitations
Regulations (licensing, testing, compliance thresholds, decentralized/centralized innovation)
Mitigate societal risks of GPT models
Capturing emotion and sentience
Lack of personalized (individual) memory and memories (past facts)
Lack of instant personalized thinking (personalized summarization)
Risk of GPTM-based decisions
AI awareness
AI-induced deskilling

Case studies with analysis and testing of AI applications
Lessons learned with existing tools (ChatGPT, Bard AI, ChatSonic, etc.)
Predictive analytics in healthcare
Medical Diagnostics
Medical Imaging
Pharmacology
AI-based therapy
AI-based finance
AI-based planning
AI-based decision making
AI-based systems control
AI-based education
AI-based cyber security



Technical Co-Sponsors and Logistics Supporters