Experimenting with the LeRobot SO-101

Some good, some not so good, and clarifications on the documentation

Summary:

About a month ago I got a LeRobot kit from WowRobo. I’ve assembled the leader and follower arms, teleoperated the units, and also collected data and trained a model to perform tasks. So far the documentation has been easy to follow. However, there have been some hiccups, and they are documented here:

Prerequisites

HF Lerobot documentation: https://huggingface.co/docs/lerobot/so101

Wowrobo kit – https://shop.wowrobo.com/products/so-arm101-diy-kit-assembled-version-1?variant=46588641607897

Issue #1: Power Supplies

Maybe it’s different for other motors (mine were Feetech), but the leader and follower arms have different power supplies, and the difference matters: the leader uses a 30W 5V supply, while the follower uses a 36W 12V supply. Mixing them up tripped me up several times, with the board losing motor IDs in the middle of a run.

Issue #2: Training

So you’ve collected your teleop data and are ready to train. There are several options:

  1. Locally – I would not recommend training a model locally, as this is VERY slow even on my M3 MacBook Pro; it may be days before you get to 10K steps.
  2. Google Colab – this is an alternative for those who are GPU-poor, and the option I ultimately used. The HF instructions have a page that walks you through setting it up here. However, the free tier only gives you a T4, which works if you set batch_size=1, and then you’ll have to hope you don’t run out of memory. If you want the beefier A100, which will train 100k steps in about 5 hours, you’ll either have to upgrade to Colab Pro or pay as you go. The PAYG option works for a one-time run, but you’ll have to babysit the notebook, otherwise it disconnects (I observed this happening every 90 minutes) and you’ll have to start over, unless you mounted the output to your own Google Drive (see below). Colab Pro is supposed to disconnect less often, but your mileage may vary. As of today, $10 (100 credits) will train your model with some credits left over.
  3. GPU providers – there are plenty to choose from, but you’ll have to do your own setup.

Mounting your Google Drive to your Colab notebook: do this in Colab to save your checkpoints so you can resume if you’re disconnected.

from google.colab import drive
drive.mount('/content/drive')
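With checkpoints on Drive, a disconnected run can be resumed instead of restarted from scratch. The sketch below follows the resume pattern in the LeRobot training docs (point `--config_path` at the last checkpoint’s train_config.json and pass `--resume=true`); the output path here is an assumption that mirrors the training command later in this post, so adjust it to your own run:

```shell
# Resume an interrupted run from the last checkpoint saved to Drive.
# The path below is illustrative; point it at your own output_dir.
!python lerobot/src/lerobot/scripts/lerobot_train.py \
  --config_path=drive/MyDrive/outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
  --resume=true
```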

Issue #3: Missing model.safetensors file

When it came time to exercise my newly trained model, I discovered the model.safetensors file was missing. I’m still not sure what happened, but as checkpoints are written, check for this file; otherwise all that training is for naught.
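A quick way to catch this early is to scan the checkpoints directory while training runs. This is a minimal sketch, assuming the checkpoint layout shown in the inference command below (checkpoints/&lt;step&gt;/pretrained_model/model.safetensors); the function name is my own:

```python
import os

def checkpoints_missing_model(train_dir):
    """Return checkpoint dirs under train_dir/checkpoints lacking model.safetensors."""
    missing = []
    ckpt_root = os.path.join(train_dir, "checkpoints")
    for name in sorted(os.listdir(ckpt_root)):
        ckpt_dir = os.path.join(ckpt_root, name)
        model_file = os.path.join(ckpt_dir, "pretrained_model", "model.safetensors")
        # Flag any checkpoint directory where the weights file never landed.
        if os.path.isdir(ckpt_dir) and not os.path.isfile(model_file):
            missing.append(name)
    return missing

# Example: checkpoints_missing_model("drive/MyDrive/outputs/train/act_so101_test")
```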

I used the following command line, which differs from the one published in the HF docs.

!python lerobot/src/lerobot/scripts/lerobot_train.py --dataset.repo_id=dpang/record-test --policy.type=act --output_dir=drive/MyDrive/outputs/train/act_so101_test --job_name=lr_20251211_0949 --policy.device=cuda --wandb.enable=true --policy.push_to_hub=true --policy.repo_id=dpang/my_policy --save_freq=1000 --batch_size=2

Issue #4: Running inference

In order to properly exercise the model, make sure to uncomment/add the teleop arguments to the command line provided by the HF instructions; otherwise you can’t reset the scene. I’m not sure why they’re commented out in the example, because you really need them between episodes.

lerobot-record  --robot.type=so101_follower  --robot.port=/dev/tty.usbmodem5AB01812601   --robot.cameras="{front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, top: {type: opencv, index_or_path: 1, width: 1920, height: 1080, fps: 30}}"  --robot.id=le_follower_arm  --display_data=false  --dataset.repo_id=dpang/eval_test  --dataset.single_task="Push cup forwrd"  --policy.path=/Users/dpang/dev/lerebotHackathon20250615/lerobot/outputs_push_cup/train/push_cup_test/checkpoints/100000/pretrained_model --teleop.type=so101_leader --teleop.port=/dev/tty.usbmodem5AB01788091 --teleop.id=le_leader_arm

Issue #5: Colab line to train pi05

So inference with the ‘act’ model went smoothly, but when it came time to try the ‘pi05’ model, things didn’t work as expected. The Colab training command in the documentation here didn’t work for me; I used the command below instead.

!python lerobot/src/lerobot/scripts/lerobot_train.py \
--dataset.repo_id=dpang/record-test \
--policy.type=pi05 \
--batch_size=4 \
--steps=20000 \
--output_dir=drive/MyDrive/outputs/train/my_pi0_5 \
--job_name=my_pi0_5_training_20260116 \
--policy.device=cuda \
--wandb.enable=true \
--policy.repo_id=dpang/my_policy

In addition, I got error messages requiring authorization for the “paligemma-3b-pt-224” model. A link is provided to request said authorization, but you’ll have to restart the notebook afterward. Also, make sure to log into HF, otherwise you’ll error out when trying to push the model to the Hub.

!huggingface-cli login

Here is a link to a successful run.