sw//mechanical-camden

Mechanical CAMden

Demo 1
Demo 2

Mechanical CAMden successfully sorting red and green blocks.

Mechanical CAMden is a computer-vision-driven robotic sorting system that I developed with Satchel Schaivo (MechE '26) and Filip Kypriotis (MechE '27) for our Fundamentals of Robotics final project. Given a 5-DOF robotic arm, we used computer vision and inverse kinematics to autonomously identify, pick up, and sort colored blocks.

Technical Components:

Computer Vision: We calibrated a fisheye camera mounted on top of the end effector to capture the sorting area. Using OpenCV and ArUco marker boards, we transformed pixel coordinates through multiple spatial frames: image/pixel → camera → board → world → robot base. This gave us the x, y, and z coordinates of each object in the frame of our robotic arm. It also let us determine the extent of the ArUco board in the image, so that we could hard-code which colors within that region to sort. A minimal sketch of the frame chain follows below.
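To make the frame bookkeeping concrete, here is a minimal sketch of the pixel → camera → board → base chain. It assumes fisheye intrinsics of the kind produced by cv2.fisheye.calibrate, a board pose from cv2.solvePnP on detected ArUco corners, and a hand-measured board-to-base transform; the intrinsic values, depth handling, and variable names are illustrative, not our exact implementation.

```python
import cv2
import numpy as np

# Example fisheye intrinsics; in practice these come from a cv2.fisheye.calibrate run.
K = np.array([[420.0,   0.0, 320.0],
              [  0.0, 420.0, 240.0],
              [  0.0,   0.0,   1.0]])
D = np.zeros((4, 1))                      # fisheye distortion coefficients

def pixel_to_base(u, v, depth, rvec, tvec, T_base_board):
    """Map pixel (u, v) at a known camera-frame depth into the robot-base frame.

    rvec, tvec: board pose in the camera frame (e.g. from cv2.solvePnP on ArUco corners).
    T_base_board: 4x4 homogeneous board -> base transform, measured once by hand.
    """
    # 1) Pixel -> normalized camera ray (undoes the fisheye distortion).
    pt = np.array([[[float(u), float(v)]]])
    xn, yn = cv2.fisheye.undistortPoints(pt, K, D)[0, 0]

    # 2) Scale the ray by the known depth (camera height above the board minus block height).
    p_cam = np.array([xn * depth, yn * depth, depth, 1.0])

    # 3) Camera -> board: invert the board-in-camera pose.
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
    t = np.asarray(tvec, dtype=float).reshape(3)
    T_board_cam = np.eye(4)
    T_board_cam[:3, :3] = R.T
    T_board_cam[:3, 3] = -R.T @ t

    # 4) Board -> robot base.
    return (T_base_board @ T_board_cam @ p_cam)[:3]
```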

Inverse Kinematics (IK): Once a block is localized, the arm uses numerical inverse kinematics and custom trajectory generation to move the end effector smoothly to the object, grasp it, and transport it to the designated bin for its color. We use forward kinematics to double-check the accuracy of our IK solutions: applying FK to the solved joint angles should reproduce the target position. A sketch of the idea follows below.
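As an illustration of the numerical IK idea, here is a small damped-least-squares solver driven by a finite-difference Jacobian, with FK used afterwards as the cross-check. The fk() chain below is a generic yaw-plus-planar-pitch toy, not the ARMPi Pro's actual kinematics, and the link lengths, damping, and tolerances are assumptions.

```python
import numpy as np

LINKS = [0.10, 0.10, 0.12, 0.06]           # illustrative link lengths (m), not measured values

def fk(q):
    """Toy FK: joint 0 yaws about the base z-axis; joints 1..3 pitch in a vertical plane."""
    r = z = 0.0
    pitch = 0.0
    for qi, li in zip(q[1:], LINKS[1:]):
        pitch += qi
        r += li * np.cos(pitch)
        z += li * np.sin(pitch)
    return np.array([r * np.cos(q[0]), r * np.sin(q[0]), z + LINKS[0]])

def ik(target, q0, iters=200, damping=1e-2, h=1e-5):
    """Damped least-squares IK on a finite-difference Jacobian."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < 1e-4:
            break
        J = np.zeros((3, q.size))
        for j in range(q.size):
            dq = np.zeros(q.size)
            dq[j] = h
            J[:, j] = (fk(q + dq) - fk(q)) / h
        # Levenberg-Marquardt style step; damping keeps the update finite near singularities.
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), err)
    return q

target = np.array([0.15, 0.05, 0.08])
q_sol = ik(target, q0=np.zeros(len(LINKS)))
print("FK cross-check error:", np.linalg.norm(fk(q_sol) - target))   # should be near zero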

Pipeline: The complete system was implemented in Python on top of the Hiwonder ARMPi Pro platform. On the CV side, we used OpenCV for the frame-to-frame conversions and for detecting and reading the ArUco markers. Running directly on the arm's Raspberry Pi, the system autonomously detected blocks, achieving ~85% localization/IK accuracy and ~98% color-classification accuracy. A sketch of the glue between the stages follows below.
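The glue between the stages looked roughly like the pick-and-place cycle below. This is only a sketch: move_to() and set_gripper() are hypothetical stubs standing in for the ARMPi Pro servo calls (whose real API is not shown here), the bin poses and approach offsets are made up, and ik() and the block coordinates refer to the sketches above.

```python
import numpy as np

# Assumed drop-off poses per color (m) and a home joint configuration; illustrative only.
BINS = {"red": np.array([0.20, -0.15, 0.05]), "green": np.array([0.20, 0.15, 0.05])}
Q_HOME = np.zeros(4)

def move_to(joint_angles):
    """Hypothetical stand-in for the ARMPi Pro servo command; here it just logs."""
    print("move to joints:", np.round(joint_angles, 3))

def set_gripper(closed):
    """Hypothetical stand-in for the gripper servo; here it just logs."""
    print("gripper", "closed" if closed else "open")

def sort_block(color, block_xyz):
    """One pick-and-place cycle: hover, descend, grasp, carry to the color's bin, release."""
    x, y, z = block_xyz                               # e.g. output of pixel_to_base()
    move_to(ik(np.array([x, y, z + 0.03]), Q_HOME))   # hover slightly above the block
    move_to(ik(np.array([x, y, z]), Q_HOME))          # descend onto it
    set_gripper(True)
    move_to(ik(BINS[color], Q_HOME))                  # carry to the matching bin
    set_gripper(False)
    move_to(Q_HOME)                                   # return home for the next detection
```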

Personal Contributions:

As the only non-mechanical engineer on the team, I handled the bulk of the software work. My individual contributions include:
  • The entire computer vision pipeline, including calibration, frame translations, and color testing.
  • Implementing our forward and inverse kinematic functions.
  • Implementing and testing the integration between the CV, IK, and Hiwonder ARMPi Pro actuation code.
Additionally, I completed the introduction, computer vision, integration, and results sections of the report (sections 0, 1, 3, and 4).