Computer Vision for Manufacturing Quality Control
How we deployed visual inspection systems that detect defects with 99.2% accuracy and process 200 parts per minute.
Manual quality inspection is slow, inconsistent, and expensive. A human inspector can reliably check 20-30 parts per minute with an accuracy of 80-90%. Our computer vision systems inspect 200 parts per minute at 99.2% accuracy — and they don't get tired, distracted, or take coffee breaks. This article shares how we design, train, and deploy visual inspection systems for manufacturing environments.
System Architecture
A manufacturing CV system has four components: image acquisition (cameras and lighting), image processing (preprocessing and augmentation), defect detection (the ML model), and action (reject, sort, alert). The model is the glamorous part, but image acquisition is where most projects succeed or fail. Poor lighting, wrong camera angle, or inconsistent positioning make even the best model useless.
- Camera selection: Industrial machine vision cameras (Basler, FLIR) with global shutter for moving parts. Consumer cameras introduce motion blur.
- Lighting is 80% of the battle: Diffuse backlighting for transparency defects, structured light for surface defects, dark-field illumination for scratches.
- Consistent positioning: Use mechanical fixtures or encoder-triggered capture to ensure every part is photographed at the same angle and distance.
- Edge inference: Run models on NVIDIA Jetson or Intel OpenVINO hardware at the production line, not in the cloud. Latency must be <50ms for real-time sorting.
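The 50 ms budget in the last bullet is worth verifying continuously, not just at commissioning. A minimal harness for that check might look like the sketch below; `infer` is a hypothetical stand-in for the real edge model (e.g. a TensorRT engine), and the 5 ms sleep simulates its runtime:

```python
import time

LATENCY_BUDGET_MS = 50.0  # real-time sorting budget

def infer(frame):
    """Stand-in for the deployed edge model; replace with the real call."""
    time.sleep(0.005)  # pretend inference takes ~5 ms
    return "pass"

def p95_latency_ms(frames):
    """Time each inference and return the 95th-percentile latency in ms."""
    samples = []
    for frame in frames:
        start = time.perf_counter()
        infer(frame)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(len(samples) * 0.95)]

lat = p95_latency_ms(range(100))
print(f"p95 latency: {lat:.1f} ms (budget {LATENCY_BUDGET_MS} ms)")
```

Tracking the 95th percentile rather than the mean matters here: a sorter that is fast on average but occasionally blows the budget will still mis-route parts.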
Model Architecture and Training
For defect detection, we typically use two approaches: classification (good/bad) for simple pass/fail decisions, and object detection (YOLOv8 or EfficientDet) when you need to localize and categorize different defect types. For surface defect detection on textured materials, we've had excellent results with anomaly detection models that learn 'normal' appearance and flag deviations.
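The anomaly-detection idea can be illustrated in a few lines: learn per-pixel statistics from good parts, then flag pixels that deviate. This is a toy sketch on synthetic data with an illustrative z-score threshold, not the production model:

```python
import numpy as np

def fit_normal_model(good_images):
    """Learn per-pixel mean and std from defect-free grayscale images."""
    stack = np.stack(good_images).astype(np.float64)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def anomaly_mask(image, mean, std, z_thresh=4.0):
    """Flag pixels whose z-score against the 'normal' model exceeds z_thresh."""
    z = np.abs(image.astype(np.float64) - mean) / std
    return z > z_thresh

rng = np.random.default_rng(0)
good = [100 + rng.normal(0, 2, (32, 32)) for _ in range(50)]
mean, std = fit_normal_model(good)

part = 100 + rng.normal(0, 2, (32, 32))
part[10:14, 10:14] = 160  # simulated bright surface defect
mask = anomaly_mask(part, mean, std)
print("defective pixels flagged:", int(mask.sum()))
```

The appeal for textured materials is that no defect labels are needed at training time: only good parts, which are abundant.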
from ultralytics import YOLO

# Train a YOLOv8 model for defect detection
model = YOLO('yolov8m.pt')  # Start from pretrained weights
results = model.train(
    data='defect_dataset.yaml',
    epochs=100,
    imgsz=640,
    batch=16,
    device='cuda',
    augment=True,
    # Manufacturing-specific augmentations
    hsv_h=0.01,  # Minimal hue variation (lighting is controlled)
    hsv_s=0.3,
    flipud=0.5,  # Parts can be upside down
    mosaic=0.5,
    mixup=0.1,
)

# Export for edge deployment
model.export(format='engine', device='cuda')  # TensorRT for NVIDIA Jetson

Handling the Data Challenge
The hardest part of manufacturing CV is data collection. Defective parts are rare — often less than 1% of production. You need hundreds of defect examples to train a robust model, but you might only see a few defects per day. We address this with synthetic data augmentation (applying artificial defects to good-part images), active learning (the model flags uncertain cases for human review), and defect simulation (intentionally creating defective parts for training).
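One simple form of synthetic data augmentation is painting artificial defects onto good-part images. The helper below is a hypothetical sketch of that idea, drawing a short dark line to mimic a scratch; real pipelines use more realistic defect textures:

```python
import numpy as np

def add_synthetic_scratch(image, rng, length=12, depth=60):
    """Darken a short horizontal line on a good-part image to mimic a scratch."""
    img = image.copy()
    h, w = img.shape
    r = int(rng.integers(0, h - 1))
    c = int(rng.integers(0, w - length))
    patch = img[r, c:c + length].astype(int) - depth
    img[r, c:c + length] = np.clip(patch, 0, 255).astype(np.uint8)
    return img

rng = np.random.default_rng(1)
good = np.full((64, 64), 180, dtype=np.uint8)
defective = add_synthetic_scratch(good, rng)
print("pixels changed:", int((defective != good).sum()))
```

Each synthetic defect comes with a free pixel-perfect label (the painted region), which is exactly what detection and segmentation models need.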
In manufacturing, false negatives (missing a defect) are far more costly than false positives (flagging a good part). We tune our models to maximize recall at the expense of some precision — it's better to over-inspect than to ship defective products.
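In practice, trading precision for recall comes down to picking the decision threshold on a validation set. A minimal sketch of that procedure (synthetic scores; the 99% recall target is illustrative) might look like:

```python
import numpy as np

def threshold_for_recall(scores, labels, target_recall=0.99):
    """Highest score threshold whose recall on defects still meets the target.

    scores: predicted defect probability per part; labels: 1 = defective, 0 = good.
    """
    pos = np.sort(scores[labels == 1])
    # Flagging parts with score >= threshold catches target_recall of the
    # defects when the threshold sits at the (1 - recall) quantile of
    # defect scores.
    k = int(np.floor(len(pos) * (1.0 - target_recall)))
    return pos[k]

rng = np.random.default_rng(2)
labels = np.concatenate([np.ones(100), np.zeros(900)]).astype(int)
scores = np.concatenate([rng.uniform(0.4, 1.0, 100),   # defects score high
                         rng.uniform(0.0, 0.6, 900)])  # good parts score low
t = threshold_for_recall(scores, labels, target_recall=0.99)
recall = float((scores[labels == 1] >= t).mean())
print(f"threshold={t:.3f}, recall={recall:.2f}")
```

Lowering the threshold this way inevitably flags more good parts; those over-inspections are the cost of keeping missed defects near zero.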
Computer vision is transforming manufacturing quality control from a bottleneck into a competitive advantage. The technology is mature, the hardware is affordable, and the ROI is typically under 6 months. If your production line still relies on manual inspection, this is the highest-impact AI investment you can make.
David Kim
Embedded Systems Lead