**🚀 LEVEL 2 WORKSHOP: CONVOLUTIONAL NEURAL NETWORKS (CNNs) AND TRANSFORMERS LIKE GPT**
*Certified by PASAIA-LAB | Duration: 4 hours | Level: Intermediate-Advanced*
**🔗 Integrity Code:** `SHA3-512: a5d9f3...`
---
### **1. SETUP (30 min)**
#### **A. Requirements**
- **Hardware**: GPU recommended (Google Colab Pro for faster training).
- **Software**:
```bash
pip install torch torchvision transformers matplotlib
```
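After installing, a quick sanity check confirms that PyTorch can see the GPU; this saves debugging time before the labs start:
```python
import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # True on a GPU Colab runtime
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```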
#### **B. Datasets**
- **CNN**: CIFAR-10 (60k 32x32 images across 10 classes).
- **Transformers**: a Wikipedia text dataset for fine-tuning.
---
### **2. LAB 1: CNNs FOR COMPUTER VISION (2 hours)**
#### **A. CNN Architecture in PyTorch**
```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 input channels (RGB), 32 filters, 3x3 kernel, no padding: 32x32 -> 30x30
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1)
        self.pool = nn.MaxPool2d(2, 2)          # 30x30 -> 15x15
        self.fc1 = nn.Linear(32 * 15 * 15, 10)  # output: 10 classes

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = x.view(-1, 32 * 15 * 15)            # flatten for the linear layer
        return self.fc1(x)
```
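The `32 * 15 * 15` follows from the shapes: a 3x3 convolution with stride 1 and no padding maps 32x32 to 30x30, and the 2x2 max-pool halves that to 15x15. A quick sanity check with a dummy batch (the variable names here are just for illustration):
```python
import torch

model = CNN()
dummy = torch.randn(1, 3, 32, 32)  # one fake CIFAR-10 image
print(model(dummy).shape)          # expected: torch.Size([1, 10])
```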
#### **B. Training and Visualization**
```python
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor()])
train_data = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = CNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(5):
    for images, labels in train_loader:
        outputs = model(images)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f'Epoch {epoch+1}, Loss: {loss.item():.4f}')
```
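Once training finishes, it is worth measuring accuracy on the held-out test split. A minimal evaluation sketch, reusing `transform` and `model` from above:
```python
test_data = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=32)

model.eval()
correct = total = 0
with torch.no_grad():  # no gradients needed for evaluation
    for images, labels in test_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f'Test accuracy: {correct / total:.2%}')
```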
#### **C. Visualizing the Learned Filters**
```python
import matplotlib.pyplot as plt

# First filter, first input channel (conv1.weight has shape [32, 3, 3, 3])
plt.imshow(model.conv1.weight[0, 0].detach().numpy(), cmap='viridis')
plt.show()
```
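To inspect all 32 filters at once, `torchvision.utils.make_grid` can tile them into a single image; one possible approach:
```python
from torchvision.utils import make_grid

filters = model.conv1.weight.detach()               # shape: (32, 3, 3, 3)
grid = make_grid(filters, nrow=8, normalize=True, padding=1)
plt.imshow(grid.permute(1, 2, 0).numpy())           # CHW -> HWC for matplotlib
plt.axis('off')
plt.show()
```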
---
### **3. LAB 2: TRANSFORMERS AND GPT (1.5 hours)**
#### **A. Fine-tuning GPT-2 with Hugging Face**
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, Trainer, TrainingArguments

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prompt: "La inteligencia artificial es" ("Artificial intelligence is")
inputs = tokenizer("La inteligencia artificial es", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
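By default `generate` decodes greedily, which tends to repeat itself; sampling usually gives livelier text. A sketch with the standard sampling parameters:
```python
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,                        # sample instead of taking the argmax
    top_k=50,                              # keep only the 50 most likely tokens
    temperature=0.9,                       # <1 sharpens, >1 flattens the distribution
    pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token of its own
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```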
#### **B. Custom Training**
```python
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=4,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,  # your tokenized dataset here (see the sketch below)
    # For causal LM fine-tuning you typically also pass
    # data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
trainer.train()
```
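The `dataset` placeholder above must be a tokenized dataset. A minimal sketch using the Hugging Face `datasets` library (`pip install datasets`), with the WikiText-2 corpus standing in for the Wikipedia data:
```python
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = raw.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM
```
Passing `data_collator=collator` to `Trainer` then pads each batch and creates the language-modeling labels automatically.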
---
### **4. LAB 3: ADVANCED OPTIMIZATION (30 min)**
#### **A. Transfer Learning with ResNet**
```python
from torchvision.models import resnet18, ResNet18_Weights

# `pretrained=True` is deprecated in recent torchvision; use the weights enum.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(512, 10)  # adapt the head for CIFAR-10's 10 classes
```
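A common refinement is to freeze the pretrained backbone and train only the new head; a minimal recipe (freeze first, then replace the head so it stays trainable):
```python
model = resnet18(weights=ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False            # freeze the pretrained backbone
model.fc = nn.Linear(512, 10)              # fresh head, trainable by default
optimizer = torch.optim.Adam(model.fc.parameters(), lr=0.001)
```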
#### **B. Quantization for Mobile Deployment**
```python
# Dynamic quantization converts Linear layers to int8 at inference time.
model_quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```
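Dynamic quantization mainly shrinks the `Linear` weights. A rough way to verify the saving is to serialize both state dicts to memory and compare sizes (a sketch; exact numbers will vary):
```python
import io
import torch

def state_dict_mb(m):
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"FP32 model: {state_dict_mb(model):.1f} MB")
print(f"INT8 model: {state_dict_mb(model_quantized):.1f} MB")
```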
---
### **5. CERTIFICATION**
- **Final Project**: Train a CNN to tell dogs from cats, or fine-tune GPT-2 to generate poetry.
- **Resources**:
  - [Book: Deep Learning (Goodfellow, Bengio & Courville)](https://www.deeplearningbook.org)
  - [Course: NLP with Transformers](https://huggingface.co/course)
**📌 Appendices:**
- [Complete notebook on Colab](https://colab.research.google.com/github/pasaia-lab/CNN-Transformers)
- [Poetry dataset for GPT-2](https://github.com/pasaia-lab/NLP-Datasets)
**Signed:**
*José Agustín Fontán Varela*
*Researcher in Advanced AI, PASAIA-LAB*
```mermaid
pie
title Workshop Breakdown (minutes)
"Theory" : 20
"CNN" : 120
"Transformers" : 90
"Optimization" : 30
```
**💡** *Tormenta Work Free Intelligence + IA Free Intelligence Laboratory by José Agustín Fontán Varela is licensed under CC BY-NC-ND 4.0*