MLLM (12)
- MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
- Flamingo: a Visual Language Model for Few-Shot Learning
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- Improved Baselines with Visual Instruction Tuning (LLaVA-1.5)
- Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens
- Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
- Lecture 14: Reasoning
- Lecture 11: High-Resolution, High-Performing LVLMs
- Lecture 10: Large Vision Language Models (LVLMs)
- Why We Feel: Breaking Boundaries in Emotional Reasoning with Multimodal Large Language Models
- LLaVA: Large Language and Vision Assistant
- EmoVIT: Revolutionizing Emotion Insights with Visual Instruction Tuning