Duration: 5 Weeks (6 Days per Week)
Delivery Mode: Hybrid (Theory + Hands-on + Self-Study)
Daily Commitment: 2 Hours
Total Duration: 30 Days (60 Hours)
Fees: 34,999/-
The following tools and platforms will be used throughout the course for development, experimentation, and deployment:
Programming Languages & Libraries
• Python 3.10+
• NumPy, Pandas, Matplotlib, Seaborn
• Scikit-learn
• PyTorch / TensorFlow (choose based on preference)
• NLTK, spaCy
• Transformers (HuggingFace)
• OpenAI API
• LangChain, FAISS / ChromaDB (for RAG & Vector DB)
Development Environments
• Google Colab (for cloud-based GPU experiments)
• Jupyter Notebooks (for local development)
• VS Code / PyCharm (for advanced editing and debugging)
Deployment & Visualization
• Streamlit (for model UI and API presentation)
• Flask / FastAPI (for backend deployment)
• Git (version control & collaboration)
• GitHub / GitLab (code hosting & project management)
Model Tracking & Experimentation
• Weights & Biases / MLflow (for model versioning and performance tracking)
Explainability & Ethics
• SHAP, LIME, Captum (for model interpretability)
Prerequisites (Before Starting the Course)
Learners should have:
• Good Python programming skills
◦ Functions, classes, list comprehensions, file handling
• Basic knowledge of Linear Algebra & Calculus
◦ Matrices, vectors, gradients, derivatives
• Basic Statistics & Probability
◦ Mean, variance, standard deviation, conditional probability
• Machine Learning Foundations
◦ Supervised & Unsupervised learning
◦ Familiarity with scikit-learn
• Basic understanding of Neural Networks
◦ Forward/backward propagation, loss functions
• Prior exposure to Machine Learning and Data Analytics
Week 1: Deep Learning Fundamentals (12 Hours)
Day 1: Recap of Neural Networks, Activation Functions
Day 2: Optimization Algorithms – SGD, Adam, RMSprop
Day 3: Regularization – Dropout, L1/L2, Batch Normalization
Day 4: Model Evaluation – Precision, Recall, F1, AUC
Day 5: Convolutional Neural Networks – Convolution, Pooling, Architectures (VGG, ResNet)
Day 6: Hands-on CNN Implementation using PyTorch/TensorFlow
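As a taste of the Week 1 material, the snippet below sketches two of the Day 1–2 topics with NumPy: common activation functions and a single step of plain SGD on a one-dimensional quadratic loss. The loss function and learning rate are illustrative choices, not course material.

```python
import numpy as np

# Day 1: common activation functions, implemented directly with NumPy.
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Subtract the max before exponentiating for numerical stability.
    z = np.exp(x - np.max(x))
    return z / z.sum()

# Day 2: one step of plain SGD on the toy loss L(w) = (w - 3)^2,
# whose minimum is at w = 3.
w, lr = 0.0, 0.1
grad = 2 * (w - 3)      # dL/dw at the current w
w = w - lr * grad       # gradient step moves w toward 3

print(relu(np.array([-1.0, 2.0])))   # [0. 2.]
print(round(w, 2))                   # 0.6
```

Repeating the update rule many times drives `w` toward 3; optimizers such as Adam and RMSprop (Day 2) refine this same step with per-parameter adaptive learning rates.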
Week 2: Natural Language Processing (NLP) (12 Hours)
Day 7: NLP Basics – Tokenization, Word Embeddings
Day 8: Word Embedding Techniques – Word2Vec, GloVe, FastText
Day 9: Sequence Models – RNN, LSTM, GRU
Day 10: Attention Mechanism & Seq2Seq Models
Day 11: Transformers – Self-Attention, Encoder-Decoder Architecture
Day 12: Introduction to BERT & Large Language Models (LLMs)
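The Days 7–8 ideas can be previewed in a few lines: a minimal whitespace tokenizer, and cosine similarity between word vectors. The 3-dimensional vectors below are made-up toy values, not real Word2Vec/GloVe embeddings, but they show how similarity between words is computed.

```python
import numpy as np

# Day 7: the simplest possible tokenizer -- lowercase, split on whitespace.
def tokenize(text):
    return text.lower().split()

# Day 8: embeddings map tokens to dense vectors; related words should
# have a higher cosine similarity. These vectors are toy values.
embeddings = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.88, 0.82, 0.12]),
    "apple": np.array([0.10, 0.20, 0.95]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

tokens = tokenize("The King and the Queen")
print(tokens)  # ['the', 'king', 'and', 'the', 'queen']
print(cosine(embeddings["king"], embeddings["queen"]) >
      cosine(embeddings["king"], embeddings["apple"]))  # True
```

Real embedding methods (Word2Vec, GloVe, FastText) learn such vectors from large corpora so that semantically similar words end up close together.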
Week 3: Generative AI & Foundation Models (12 Hours)
Day 13: GANs – Architecture, Types (DCGAN, CycleGAN)
Day 14: Diffusion Models – Fundamentals and Applications
Day 15: Hands-on GANs – Image Generation
Day 16: HuggingFace Transformers – Using Pretrained Models
Day 17: Fine-tuning BERT, GPT for NLP Tasks
Day 18: Prompt Engineering and Inference Techniques
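The core idea behind Day 18's prompt engineering is separating fixed instructions (and few-shot examples) from the variable input. A minimal sketch in plain Python, with an illustrative template that is not taken from any specific course material:

```python
# Day 18: a few-shot prompt template. The fixed part carries the task
# instruction and one worked example; only {review} varies per call.
FEW_SHOT_TEMPLATE = """You are a sentiment classifier.
Answer with exactly one word: positive or negative.

Review: "I loved every minute of it."
Sentiment: positive

Review: "{review}"
Sentiment:"""

def build_prompt(review: str) -> str:
    return FEW_SHOT_TEMPLATE.format(review=review)

prompt = build_prompt("The plot was dull and predictable.")
print(prompt.endswith("Sentiment:"))  # True
```

The completed string would then be sent to an LLM (e.g. via the OpenAI API covered in Week 5); ending the prompt at "Sentiment:" steers the model to complete with just the label.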
Week 4: RL, Explainability, Projects & Tools (12 Hours)
Day 19: Reinforcement Learning Basics – MDPs, Q-Learning
Day 20: Deep Q-Networks (DQN), Policy Gradients
Day 21: Multi-agent RL & Real-World Use Cases
Day 22: Explainable AI – SHAP, LIME, Captum
Day 23: Ethical AI – Bias, Fairness, Responsible AI
Day 24: AI Deployment – APIs, Streamlit
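Day 19's tabular Q-learning can be demonstrated end-to-end on a toy environment. The sketch below uses a 5-state chain world (reach the rightmost state for reward 1); the environment and all hyperparameters are illustrative choices, not course material.

```python
import random

# Day 19: tabular Q-learning on a 1-D chain with states 0..4.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, [0, 1]            # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # illustrative hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection, breaking ties randomly.
        best = max(Q[(s, b)] for b in ACTIONS)
        greedy = [b for b in ACTIONS if Q[(s, b)] == best]
        a = rng.choice(ACTIONS) if rng.random() < epsilon else rng.choice(greedy)
        s_next = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a').
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy is expected to choose "right" in every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Deep Q-Networks (Day 20) replace this explicit Q table with a neural network so the same update can scale to large or continuous state spaces.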
Week 5: Projects (12 Hours)
Day 25: Project 1 – Image Classification with Transfer Learning
Day 26: Project 2 – Text Classification using BERT
Day 27: Project 3 – Chatbot using Transformers
Day 28: LangChain & RAG (Retrieval Augmented Generation)
Day 29: AI Tools – OpenAI API, LLM Agents, Vector DBs
Day 30: Git, Model Versioning, Collaboration Tools
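Day 28's RAG pipeline has a retrieval step whose mechanics can be reduced to a few lines. Production systems embed text with a neural model and query a vector database such as FAISS or ChromaDB; the sketch below substitutes bag-of-words vectors and brute-force cosine similarity purely to illustrate the idea.

```python
import numpy as np

# Day 28: the retrieval step of RAG, reduced to its essence.
docs = [
    "transformers use self attention",
    "cnns use convolution and pooling",
    "q learning estimates action values",
]

# Build a shared vocabulary and represent each text as word counts.
vocab = sorted({w for d in docs for w in d.split()})

def embed(text):
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def retrieve(query, k=1):
    # Score every document by cosine similarity to the query vector.
    q = embed(query)
    scores = []
    for d in docs:
        v = embed(d)
        scores.append(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

print(retrieve("what is self attention"))  # ['transformers use self attention']
```

In a full RAG system the retrieved passages are appended to the prompt so the LLM can ground its answer in them; LangChain wires these stages together.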
Certification
• Issued by KITS TECH LEARNING CENTER
• Other certifications are optional and chargeable.
We would like to hear from you. Feel free to reach out with any query.