2022 Week 1 CSA Notes

As of this writing:

  • The Driver Station must show version 22.0 or later in the title bar.
    The latest DS is part of the FRC Game Tools that can be downloaded from National Instruments.
  • RoboRIO must be imaged to 2022_v4.0 or later.
    You can see the image version inside the Driver Station on the Diagnostics Tab.
    The newest image and Imaging software are also in the National Instruments Game Tools.
  • The latest version of WPILib and Visual Studio Code for Java and C++ developers is 2022.4.1. The latest version can always be downloaded from the WPILib Github Release page. This release works with the latest RoboRIO image version, so you’ll need to update everything.
  • The GradleRIO version will be 2022.4.1 in your build.gradle file. The Visual Studio Code plugin should automatically offer to update your GradleRIO version.
  • Teams should go through all their components to make sure the firmware is up to date.
    RevRobotics components should be updated with the REV Hardware Client.
    CTRE components should be updated with the Phoenix Tuner.
  • The 2022 Inspection Checklist is available online.

If you are headed to a competition, please update all your software before the event.

Notes from Duluth

The 2022 Northern Lights Regional and Lake Superior Regional were both held at the Duluth DECC. Pit areas for both were in the same large room, which made it easier for CSAs to help out teams from both regionals.

  • As usual, the most common problem was teams whose RIO image, Driver Station, or GradleRIO were not up to date. Updating the RIO image usually meant that WPILib and VS Code also needed an update.
  • Imaging the new RIO 2s typically required popping out the microSD card and imaging it with Etcher. After the card was written, the imaging tool still had to be used to set the team number.
  • I saw fewer issues with cameras than at past events. There were a few calls with Limelight questions, which were handled by CSAs who are Limelight experts.
  • There were a couple of problems caused by metal shavings shorting out PWM pins.

Unit Testing Commands

The WPILib command framework divides your robot program into two types of classes: subsystems and commands.  The subsystem classes represent the major physical parts of the robot, such as a shooter subsystem, a drive-train subsystem, or a manipulator arm subsystem.  The command classes define the actions taken by the subsystems, such as shooting a ball, moving the drive-train, or raising the manipulator arm.


Most of your programming time will go into creating, refining, and debugging new commands.  Commands will be the most sophisticated part of your code, and therefore carry the greatest risk of going wrong.  You should spend a lot of time testing them.

So far we have tested simple functions and verified the primitive functionality in subsystems.  The next step is to create automated tests for your commands.

Testing a simple Command

Our simple example robot contains a Shooter subsystem that shoots balls.  The ShooterSubsystem has a high-speed wheel for throwing the ball, and a servo arm that can raise the ball up until it touches the wheel.  We will need a command to set the wheel speed, and another to control the servo arm.

A simple Command

Here is the command to raise or lower the servo arm:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;

public class ShooterServoArmCommand extends SendableCommandBase {

  private final boolean fire;
  private final ShooterSubsystem shooter;

  public ShooterServoArmCommand(boolean fireArm, ShooterSubsystem shooterSubsystem) {
    fire = fireArm;
    shooter = shooterSubsystem;
  }

  @Override
  public void execute() {
    if (fire) {
      shooter.fire();
    } else {
      shooter.retract();
    }
  }

  @Override
  public boolean isFinished() {
    return true;
  }
}

Take note of the two parameters on the constructor:  fireArm and shooterSubsystem.   This command can either raise the arm or lower it, depending on whether the fireArm parameter is true or false.

By specifying the shooterSubsystem in the constructor we are using Dependency Injection, which makes the code more reusable and more testable.  When testing, we can replace the real subsystems with mock objects that fake the subsystem’s functionality.

A simple Test

Our task does two different things: retract and fire. First let’s test that firing the ball works:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;
import org.junit.*;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class ShooterServoArmCommandTest {

    private CommandScheduler scheduler = null;

    @Before
    public void setup() {
        scheduler = CommandScheduler.getInstance();
    }

    @Test
    public void testFireArm() {
        // Arrange
        ShooterSubsystem shooter = mock(ShooterSubsystem.class);
        ShooterServoArmCommand fireCommand 
                = new ShooterServoArmCommand(true, shooter);

        // Act
        scheduler.schedule(fireCommand);
        scheduler.run();

        // Assert
        verify(shooter).fire();
    }
}

The test follows our Arrange / Act / Assert pattern:

  • We create a mock version of our ShooterSubsystem.  If we wanted, we could also define some mock behaviors at this point.
  • We create the actual command we will test.  In this case we set the fireArm parameter to true, indicating that we want to fire the ball.
  • In the command framework, we never explicitly execute the command methods.  Instead, we “put it on the schedule”.  After this, the command scheduler will run the methods appropriately.  On a real robot, the scheduler tries to run all scheduled commands every 20 milliseconds.  In this case we know that our command will only run once before it’s done.
  • At the end of the test, we ask the mock framework to verify that the shooter’s “fire” method was called exactly once.

Unit tests all execute whenever we build the code.  Go ahead and run the “Build Robot Code” action within Visual Studio Code.  Next, write a similar test to verify that the command also correctly retracts the servo arm:

@Test
public void testRetractArm() {
    // Arrange
    ShooterSubsystem shooter = mock(ShooterSubsystem.class);
    ShooterServoArmCommand retractCommand = new ShooterServoArmCommand(false, shooter);

    // Act
    scheduler.schedule(retractCommand);
    scheduler.run();

    // Assert
    verify(shooter).retract();
}

Testing a Command Group

Simple commands can be grouped together to run sequentially or in parallel as more complicated commands.

A more complex Command

For instance, actually shooting a ball is a sequence of steps:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;

public class AutoShootCommand extends SequentialCommandGroup {
    public AutoShootCommand(ShooterSubsystem shooter) {
        addCommands(
                new PrintCommand("BEGIN: AutoShootCommand"),
                new ShooterServoArmCommand(false, shooter),
                new ShooterSetSpeedCommand(1.0, shooter),
                new WaitCommand(0.5),
                new ShooterServoArmCommand(true, shooter),
                new WaitCommand(0.5),
                new ShooterSetSpeedCommand(0.0, shooter),
                new ShooterServoArmCommand(false, shooter),
                new PrintCommand("END: AutoShootCommand"));
    }
}

Note that we are again using dependency injection, but that the same ShooterSubsystem will be used in all the internal commands.

Besides the shooter commands, we’ve also thrown in a couple of PrintCommands.  These commands print out to the console at the beginning and end of the command.  They also print to the Log File Viewer to be reviewed after a match.

Also we’ve thrown in a couple of WaitCommands, which give the shooter wheel half a second to spin up before shooting and then maintain speed while the ball is firing.

Testing a Command Group

A command group test follows the same pattern as simpler tests:

package frc.robot.commands;

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import org.junit.*;

import edu.wpi.first.wpilibj.experimental.command.CommandScheduler;
import frc.robot.subsystems.ShooterSubsystem;

public class AutoShootCommandTest {

    private CommandScheduler scheduler = null;

    @Before
    public void setup() {
        scheduler = CommandScheduler.getInstance();
    }

    @Test
    public void testShoot() throws InterruptedException {
        // Arrange
        ShooterSubsystem shooter = mock(ShooterSubsystem.class);
        AutoShootCommand command = new AutoShootCommand(shooter);

        // Act
        scheduler.schedule(command);
        for (int i = 0; i < 100; i++) {
            scheduler.run();
            Thread.sleep(20);
        }

        // Assert
        verify(shooter, times(2)).retract();
        verify(shooter, times(1)).fire();
    }
}

This command takes many run cycles to finish, so the test runs the scheduler many times, pausing 20 milliseconds between each execution.

After executing everything in the command group, we verify that the subsystem experienced all the actions for shooting.

Writing quality tests

It’s important to remember why we do unit testing: we create suites of automated tests to improve the quality of our software.  Writing quality tests is a big subject, and these last three articles have covered a lot of ground.  It would be easy to be overwhelmed, or even dubious, about all of this.  So keep your eye on the end goal: software quality.

In a sense, writing methodical tests is a stepping stone from just programming into Software Engineering. Engineering means using systematic and disciplined practices when creating things.  Your tests will verify and quantify your software quality, in a way that others can read and evaluate.

Further Reading


Unit Testing Subsystems

Testing is an element of any software development, and it’s certainly a big part of robot programming.  You’ve probably already done a lot of robot testing: deploy your code and test the robot.  Hopefully you’re already familiar with unit testing of small functions, but we can also automate the testing of whole subsystems.

Unit testing with WPILib

To demonstrate automated testing of robot subsystems, we’ll use a simplified robot program.  This program runs on a real robot built for the 2016 game, FIRST Stronghold.

A simple subsystem

In the WPILib command pattern a subsystem class represents a physical subset of the robot.  A subsystem contains physical components, such as motors and sensors.  There will be actions to perform on the subsystem, such as to drive or shoot.  For this example, we have a simple robot with two subsystems representing the robot chassis with its drive motors, and a shooter for throwing balls.


Mostly we’re going to work on testing the ShooterSubsystem.  The shooter has two components: a motor attached to a spinner wheel  and an arm attached to a servo that manipulates the ball.  To shoot a ball we will:

  1. Retract the servo arm so we can pick up a ball.
  2. Start the shooter wheel spinning.
  3. Extend the servo arm so the ball is pushed into the wheel.  The ball will go flying.
  4. Reset the system.  The wheel will be stopped and the servo retracted.

(Shooter Picture)

Here’s the code for the shooter subsystem:

package frc.robot.subsystems;

import static frc.robot.Constants.*;
import edu.wpi.first.wpilibj.Servo;
import edu.wpi.first.wpilibj.SpeedController;
import edu.wpi.first.wpilibj.experimental.command.*;

public class ShooterSubsystem extends SendableSubsystemBase {

    protected final SpeedController shooterMotor;
    protected final Servo shooterServo;
    protected boolean servoRetracted = true;

    public ShooterSubsystem(SpeedController motor, Servo servo) {
        shooterMotor = motor;
        shooterServo = servo;
    }

    public void setSpeed(double speed) {
        shooterMotor.set(speed);
    }

    public void retract() {
        servoRetracted = true;
        shooterServo.set(0.0);   // pull the arm back
    }

    public void fire() {
        servoRetracted = false;
        shooterServo.set(1.0);   // push the ball into the wheel
    }

    public void reset() {
        setSpeed(0.0);
        retract();
    }
}

Note that the constructor takes two parameters as inputs: motor and servo. The motor and servo objects will be created elsewhere and then injected when the subsystem is constructed.

Mock testing with WPILib

The best way to do testing is with the full robot; load your code and go through a methodical test process.  Too often however, we don’t have sufficient access to the robot.  Maybe it hasn’t been built at all, or maybe it is shared with our teammates.  How can we test the code without access to the robot?  The answer is that we can test much of the logic with “mock” components.  Mocks are software objects that stand in for the real classes.  Instead of real motors, servos, and sensors, we’ll use mock motors, mock servos, and mock sensors.
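To make the idea concrete, here is what a hand-rolled mock can look like in plain Java. This is a sketch: SimpleSpeedController and MockSpeedController are hypothetical stand-ins for illustration, not WPILib or Mockito classes.

```java
// A minimal stand-in for a speed-controller interface (hypothetical).
interface SimpleSpeedController {
    void set(double speed);
    double get();
}

// A hand-rolled mock: instead of driving hardware, it records what
// happened so a test can inspect it afterwards.
class MockSpeedController implements SimpleSpeedController {
    private double lastSpeed = 0.0;
    private int setCalls = 0;

    @Override
    public void set(double speed) {
        lastSpeed = speed;
        setCalls++;
    }

    @Override
    public double get() {
        return lastSpeed;
    }

    int timesSetWasCalled() {
        return setCalls;
    }
}
```

A test can pass a MockSpeedController wherever a SimpleSpeedController is expected, then assert on get() and timesSetWasCalled(). A mocking framework generates this kind of class for us automatically, with far more flexibility.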

We will use the Mockito framework to create mock SpeedControllers and mock Servos.   Mockito is a professional package for creating Java mocks, defining the mock behavior and checking the results.

To use Mockito, you’ll need to make two simple changes to your build.gradle file.

    1. Change the value of the includeDesktopSupport variable to true.
    2. Add the following line into the dependencies section: testCompile "org.mockito:mockito-core:2.+"
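
With both changes in place, the affected parts of build.gradle look roughly like this (a sketch; the rest of the generated file is unchanged and omitted here):

```groovy
// build.gradle (fragment) -- only the Mockito-related pieces are shown
def includeDesktopSupport = true

dependencies {
    // ...existing GradleRIO dependencies stay as generated...
    testCompile "org.mockito:mockito-core:2.+"
}
```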


A simple unit test

Add a “test” directory under “src” for your Java unit tests.  Right-click on “src”, select “New Folder”, and enter “test/java/frc/robot/subsystems”.  Then right-click on “subsystems” and create an empty class named “ShooterSubsystemTest.java”.


Now we can create a test of the subsystem’s constructor:

package frc.robot.subsystems;

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;
import edu.wpi.first.wpilibj.*;
import org.junit.*;

public class ShooterSubsystemTest {

    @Test
    public void testConstructor() {
        // Arrange
        SpeedController motor = mock(SpeedController.class);
        Servo servo = mock(Servo.class);

        // Act
        ShooterSubsystem shooter = new ShooterSubsystem(motor, servo);

        // Assert
        assertEquals(true, shooter.servoRetracted);
    }
}

In this test we first create mock objects for the motor and the servo.  The action we are testing is just to create the shooter object.  After performing the action, we verify that the servo is retracted.

Note that the test is broken into sections.  The Arrange / Act / Assert breakdown is a common pattern for designing tests.  Sometimes we’ll add some extra sections, but most tests will have the basic three parts.

You could argue that this test is a little superficial, and you’d be right. However, this test does serve a purpose. If at some later date someone changed the subsystem so it didn’t initially retract the servo, this test would fail.  We would then need to decide whether the code or the test has become incorrect.

Another unit test

Next let’s write a test for the setSpeed method.  This method sets the speed of the motor.  After it has been executed, the motor controller will have a different speed:

@Test
public void testSetSpeed() {
    // Arrange
    SpeedController motor = mock(SpeedController.class);
    Servo servo = mock(Servo.class);
    ShooterSubsystem shooter = new ShooterSubsystem(motor, servo);
    when(motor.get()).thenReturn(0.5);

    // Act
    shooter.setSpeed(0.5);

    // Assert
    assertEquals(0.5, shooter.shooterMotor.get(), 0.001);
}
First we set up the mock objects and the shooter subsystem. This time we tweak the mock motor a little, specifying that when we get the motor’s speed, then it will return 0.5. The action is to set the speed. Afterwards we check that the speed was really set (and specifying a margin of error of 0.001).

As your tests get more sophisticated, you’ll use the “when” method to add more mock behavior to your mock objects.

The code above is another fairly superficial test, but it does exercise the code and the mock objects.  Let’s consider more features of the mock framework:

Yet another unit test

Let’s test the “reset” method of our subsystem.  In this case we want to verify that the motor has really been stopped and the servo arm has been retracted.

@Test
public void testReset() {
    // Arrange
    SpeedController motor = mock(SpeedController.class);
    Servo servo = mock(Servo.class);
    ShooterSubsystem shooter = new ShooterSubsystem(motor, servo);

    // Act
    shooter.reset();

    // Assert
    verify(motor).set(0.0);
    verify(servo).set(0.0);
    assertEquals(true, shooter.servoRetracted);
}

This time there are more lines of code in the “Assert” section.  Besides verifying that the servo arm was retracted, we also run two verifications on the mock objects.

The “when” and “verify” features of mock objects allow for some sophisticated tests.  You may see your tests growing with many fiddly mock behaviors.  This is usually OK.  Just make your tests as simple as possible, but no simpler.

Dependency injection

Our ShooterSubsystem depends on two objects created elsewhere: a servo and a motor speed controller.  Those dependencies are specified in our subsystem’s constructor.  This pattern is called Dependency Injection.  The tests described above wouldn’t be possible if we weren’t able to inject mock objects into our system under test.

Dependency Injection is an important concept within software engineering.  Besides encouraging testability, it supports the concept of Separation of Concerns. This means that we often break a large program into sections that each handle different concerns.  In this case we have one class that handles creation and definition of physical components (typically a RobotMap or RobotTemplate class) and another class that defines the behavior and interaction between those components (our subsystem).
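Stripped of WPILib specifics, the pattern looks like this (a sketch; all of the names here are hypothetical, chosen only to illustrate the idea):

```java
// The consumer depends only on an interface.
interface Motor {
    void set(double speed);
}

// The production implementation would talk to real hardware.
class RealMotor implements Motor {
    public void set(double speed) { /* drive the physical motor */ }
}

// A test double that simply records the last commanded speed.
class RecordingMotor implements Motor {
    double lastSpeed;
    public void set(double speed) { lastSpeed = speed; }
}

// The dependency is injected through the constructor, so the wiring
// code decides which implementation the Launcher gets.
class Launcher {
    private final Motor motor;

    Launcher(Motor motor) {
        this.motor = motor;
    }

    void spinUp() {
        motor.set(1.0);
    }
}
```

On the robot, the wiring code would construct new Launcher(new RealMotor()); a test constructs new Launcher(new RecordingMotor()) and asserts on lastSpeed, with no hardware in sight.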

Further Reading


Simple Unit Tests

Every programmer has at one time or another deployed code without having tested it.  Simple changes go out with the assumption that they cannot possibly fail. And then they betray us.  We learn the lesson: all code must be tested, thoroughly and repeatedly.

On robots we often need the hardware to do some of the testing, but there are still a lot of tests that can be executed offline.  Ideally, you should build up suites of tests that execute automatically; just start one test program and all tests execute.  There are many categories of automated tests, but the most common is unit testing, because such tests cover small units of functionality, including the functions you assumed couldn’t fail.

Well-crafted unit tests will improve the quality of your software and ensure its quality down the road.  You may choose to organize your development around those tests, a practice called Test Driven Development.  Unit tests are also essential to refactoring, a systematic technique for improving your code; you’ll need automated tests to verify that your refactored code still works correctly.

Unit testing with WPILib

GradleRIO is already set up for the unit testing frameworks JUnit and GoogleTest.   If you define unit test classes in your project, they will automatically execute every time you build the code.

Let’s define a simple function and create unit tests for it.  Don’t worry that this code looks too simple to merit testing.   Remember that no code is so trivial that it won’t fail.

A simple function

Suppose you’ve got a Gyro installed on your robot.  When you first start it up, the gyro will show 0 degrees.  Rotate the robot a little to the right and it might read 30 degrees.  However, the gyro’s scale is continuous, so after a lot of driving around it might read 1537 degrees or -2781 degrees.  This might mess up the math in some of your autonomous commands, since 1537 degrees is really the same as 97 degrees.  We need a function that simplifies angles into the range -180 to 180.  Here are some test cases:

  • 270 degrees is the same as -90 degrees
  • -315 degrees is the same as 45 degrees
  • 30 degrees is still 30 degrees
  • -60 degrees is still -60 degrees

Here’s a simple conversion function.  It definitely isn’t perfect, but we’ll fix that in a minute:

public int simplifyAngle(int angle) {
    if (angle > 180) {
        angle = angle - 360;
    }
    if (angle < -180) {
        angle = angle + 360;
    }
    return angle;
}

For this example, this function is in your Robot class which is stored with the other java main classes in your “src” directory:


A simple unit test

Add a “test” directory under “src” for your Java unit tests.  Right-click on “src”, select “New Folder”, and enter “test/java/frc/robot”.  Then right-click on “robot” and create an empty class named “RobotTest.java”.


Consider the test method:

@Test
public void testSimplifyAngle() {
    Robot robot = new Robot();
    assertEquals(-90, robot.simplifyAngle(270));
    assertEquals(45, robot.simplifyAngle(-315));
    assertEquals(-60, robot.simplifyAngle(-60));
    assertEquals(30, robot.simplifyAngle(30));
}
The @Test annotation on top means that this method will be executed by the GradleRIO test task.  We create a Robot object and then test our method for each of the test cases.

This test class will execute every time you build robot code.  If any of the assertions fail, the whole build will be rejected. To see what happens on a failure, temporarily change the 30 degree test so it expects -30 degrees. The build will fail and tell you to check line 15:


Improving the function

How many test cases should you use?  Usually more than you would expect, even for simple functions.

Always include a trivial test case, sometimes called the “happy path” case. The 30 degree and -60 degree test might be considered happy path tests, but we could also test 0 degrees.  Add some test scenarios where there are logical transitions; these are called “corner cases”.  For this example, corner tests might be at 180 degrees and -180 degrees.  Also test a couple extreme cases, such as 1537 degrees and -2781 degrees.  Extreme tests at absolute maximums or minimums are called “edge cases”.

Now our test looks like this:

@Test
public void testSimplifyAngle() {
    Robot robot = new Robot();
    assertEquals(-90, robot.simplifyAngle(270));
    assertEquals(45, robot.simplifyAngle(-315));
    assertEquals(-60, robot.simplifyAngle(-60));
    assertEquals(30, robot.simplifyAngle(30));
    assertEquals(0, robot.simplifyAngle(0));
    assertEquals(180, robot.simplifyAngle(180));
    assertEquals(-180, robot.simplifyAngle(-180));
    assertEquals(97, robot.simplifyAngle(1537));
    assertEquals(99, robot.simplifyAngle(-2781));
}

Executing this test reveals that our function fails for the extreme cases.  Our function can’t handle 1537 degrees.  We’ve found a bug in our logic.   We go back to the original function and, after a little thought,  change it to the following:

public int simplifyAngle(int angle) {
    while (angle > 180) {
        angle = angle - 360;
    }
    while (angle < -180) {
        angle = angle + 360;
    }
    return angle;
}

Now our test passes.  The bug is fixed.


At some point, you or one of your teammates will rewrite parts of the robot code, at which point you must retest and verify that the new code is at least as good as the old.  For instance, someone might refactor the angle simplification like this:

public int simplifyAngle(int angle) {
    return angle > 180
            ? simplifyAngle(angle - 360)
            : angle < -180 ? simplifyAngle(angle + 360) : angle;
}

Does this function do the same job?  It turns out that it does. Is this function better? Well, it is shorter, but you should decide if it’s really more readable.

Eventually, you might stumble on logic like this:

public int simplifyAngle(int angle) {
    return (int) Math.round(Math.IEEEremainder(angle, 360.0));
}

This is even shorter.  It’s much more cryptic, but it does pass the tests.  You could use any of these functions in your robot.  Unit tests have verified that they all do the same thing.
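One way to back up that claim is a small harness that runs all three versions against the same test cases. This is a sketch that can run outside the robot project; the three implementations are inlined here as static methods.

```java
public class SimplifyAngleComparison {
    static int simplifyLoop(int angle) {
        while (angle > 180) {
            angle = angle - 360;
        }
        while (angle < -180) {
            angle = angle + 360;
        }
        return angle;
    }

    static int simplifyRecursive(int angle) {
        return angle > 180
                ? simplifyRecursive(angle - 360)
                : angle < -180 ? simplifyRecursive(angle + 360) : angle;
    }

    static int simplifyRemainder(int angle) {
        return (int) Math.round(Math.IEEEremainder(angle, 360.0));
    }

    // Each row is {input, expected}, taken from the article's test cases.
    static final int[][] CASES = {
        {270, -90}, {-315, 45}, {30, 30}, {-60, -60}, {0, 0},
        {180, 180}, {-180, -180}, {1537, 97}, {-2781, 99}
    };

    public static void main(String[] args) {
        for (int[] c : CASES) {
            if (simplifyLoop(c[0]) != c[1]
                    || simplifyRecursive(c[0]) != c[1]
                    || simplifyRemainder(c[0]) != c[1]) {
                throw new AssertionError("Disagreement at " + c[0]);
            }
        }
        System.out.println("All three implementations agree on every case");
    }
}
```

Note that agreement on these cases does not prove agreement everywhere: for an input exactly on the wrap boundary, such as 540, the loop version returns 180 while the IEEEremainder version returns -180. The unit tests define which behavior you require.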

Writing good tests

Now that you know how to create unit tests, start adding them to your robot projects. You will find that writing good tests is as difficult and subtle a skill as programming the robot code.  You should start watching for opportunities to test.  Break up your big methods into smaller methods and structure them so they are more amenable to testing.  Test the simple things, but especially watch for code that is tricky.

It’s probably possible to write too many tests, but don’t worry about that.  On professional projects, the test suites are often larger than the production code.

Good unit tests should have the following qualities:

  1. Test the requirements and nothing but requirements.  In the above example we require that 270 degrees is simplified down to -90 degrees.  However, don’t try to craft tests that verify the number of times the “while” loop executes to achieve this.
  2. Tests should be deterministic and always succeed or fail based on the requirements.  Take care around code that depends on hardware or file systems or random functions or timers or large memory usage.  Structure your code so you can manage any randomness.
  3. Unit tests should be fast.  They execute before every build and you don’t want to start regretting how slow they are.
  4. Tests should be easy to read, understand, and maintain later.
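
Point 2 is worth a concrete illustration: randomness becomes manageable when the Random instance is injected instead of created internally (a sketch; AutoChooser is a hypothetical example class, not part of WPILib):

```java
import java.util.Random;

// Hypothetical example: picks an autonomous routine at random.
// Because the Random is injected, a test can pass a seeded instance
// and get the same "random" choice on every run.
class AutoChooser {
    private final Random random;

    AutoChooser(Random random) {
        this.random = random;
    }

    String pick(String[] routines) {
        return routines[random.nextInt(routines.length)];
    }
}
```

Robot code would construct new AutoChooser(new Random()), while a test constructs new AutoChooser(new Random(42)), so the assertion never flakes.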

The above example is intentionally simple.  Once you’ve mastered the concepts you can start to think about automated testing of larger classes, non-trivial state machines,  subsystems and commands.

Further Reading


FRC 2019 – Camera Best Practices

To get the most out of your cameras for FRC 2019, please consider following these recommendations. This document does not contain the theory behind the recommendations. If the theory is desired, or for any questions regarding these recommendations, please contact an MN CSA at firstmn.csa@gmail.com or on http://firstmncsa.slack.com.

Desired goals that drive these recommendations

  • Low latency
    • Allows the driver to react to the most current robot status, with minimal delay in the driver-input-to-robot-action cycle.
  • Low bandwidth usage
    • Reduced risk of driver input being delayed due to high bandwidth.
      • There is a Quality of Service mechanism that should prevent this, but to fully eliminate the risk, reduce bandwidth if possible.
    • The bandwidth target is below 3 Mbps.
  • Ease of use

Possible Misconceptions

  • Higher FPS means lower latency.
    • While higher FPS can appear to reduce latency in a video game, that only occurs when the underlying infrastructure can support the desired FPS with minimal latency to begin with.
    • Low latency is a function of the infrastructure’s ability to get data from point A, the camera, to point B, the DS screen, with minimal delays. This can only occur if the infrastructure has spare capacity to process, transmit, and display the video.
    • Higher FPS can easily overload the underlying infrastructure, which can cause delays at every stage of the point a to point b pipeline, thus increasing the overall latency.
    • Lowering FPS to a level at which the infrastructure can handle the pipeline, while still maintaining spare capacity, will help achieve the lowest possible latency.
  • High Resolution is better
    • High resolution is desirable if additional detail allows for a strategic advantage, but for most tasks, lower latency will offer a much better robot control experience.
    • 640×480 is not twice as much as 320×240; it is 4 times as much. The extra time required to process, transmit, and display 4 times the data is most likely not worth the higher latency and reduced capacity that come with it.
  • This or that device is the right one for all tasks.
    • Not all devices work well in all situations; balance the total cost to implement, maintain, and configure additional devices before making changes. Cost here means money, time, expertise, weight, etc.
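
The resolution arithmetic in that list is easy to verify with a quick calculation:

```java
public class ResolutionMath {
    public static void main(String[] args) {
        int small = 320 * 240;   // 76,800 pixels per frame
        int large = 640 * 480;   // 307,200 pixels per frame
        // 640x480 carries 4x the data of 320x240, not 2x.
        System.out.println("640x480 / 320x240 = " + (large / small) + "x");
    }
}
```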

Driver Cam

  • Use FRCVision on a Raspberry Pi instead of cameras hosted on the roboRIO.
  • URL: https://wpilib.screenstepslive.com/s/currentCS/m/85074/l/1027241-using-the-raspberry-pi-for-frc
  • Benefits:
    • Potential for robot code to respond faster to driver input by offloading CPU intensive task from roboRIO.
    • Lower video latency and higher frame rates due to the increased CPU cycles available on the Pi.
    • Ability to handle more concurrent streams than a roboRIO.
    • Ability to control stream from FRC shuffleboard and LabView Dashboard.
    • Ability to control Resolution, FPS and compression per camera feed.
    • Ability to have a per camera vision processing pipeline.
    • Multiple language choices for vision processing pipeline.
    • No need to add code for basic camera streaming.
  • Recommended Usage:
    • Driver video streaming.
    • Video processing, target acquisition and tracking.
  • Recommended Equipment:
    • Raspberry Pi 3 B or B+, B+ preferred.
    • Microsoft Lifecam HD-3000
    • Logitech c920, c930, c270, c310
    • Any Linux UVC-supported USB camera that supports MJPEG and the desired resolution and FPS in camera hardware: http://www.ideasonboard.org/uvc/#devices
  • Optional Equipment:
  • Recommended hardware settings, per camera.
    • Format: MJPEG
    • Resolution: 320×240
    • FPS: 15-20; reduce as needed to lower Pi CPU usage.
  • Recommended stream settings, per camera
    • Format: MJPEG
    • Resolution: 320×240
    • FPS: 10-15; reduce as needed to lower Pi CPU usage or bandwidth.
    • Compression: 30; adjust as needed to balance CPU usage, bandwidth, and clarity.
  • Considerations:
    • Power: ensure 2.5+ amps are available to the Pi, especially if using 3-4 cameras and/or a vision processing pipeline.
    • The actual FPS per video stream, as listed in the DS view, should match the FPS configured for that camera. If it does not, lower the FPS and/or resolution, or increase compression, until the actual and configured FPS match and the video quality and latency are acceptable.

Limelight

  • Recommended Usage:
    • Rather than using driver mode, create a “driver” pipeline. Turn down the exposure to reduce stream bandwidth.
    • Using a USB camera? Use the “stream” NT key to enable picture-in-picture mode. This will dramatically reduce stream bandwidth.
    • Turn the stream rate to “low” in the settings page if streaming isn’t critical for driving.
  • Considerations:
    • Do NOT use for driver vision.
    • Use only for target acquisition and tracking.
    • Stream the output of the vision pipeline to the DS only if bandwidth allows.

Chris Roadfeldt


2019 Week 2 CSA Notes

Duluth Double DECCer

In Minnesota, the week 2 events were the Lake Superior Regional and the Northern Lights Regional, both held in the Duluth Entertainment Convention Center (the DECC).

Our observations include:

  • All roboRIOs had to be re-imaged to version FRC_roboRIO_2019_v14.  This update was released after stop-build day, so every bagged robot had to be updated.
    If you haven’t yet attended your first 2019 competition, you can prepare for this by updating your laptops with the FRC Update 2019.2.0.
    If you are helping teams at competition with this, it might be a little quicker to give them the FRC_roboRIO_2019_v14 file and reimage their RIO.
  • All Java and C++ projects had to be updated to GradleRIO version 2019.4.1.  GradleRIO version changes always require initial software downloads, so the first build after changing your version must be done while connected to the internet.  It’s far better to do this before the competition, while you have a good network connection.
    If you are helping teams at the competition, you can give them the latest WPILib update.  This update will install all the latest GradleRIO dependencies, minimizing download time.
  • We were expecting camera problems for LabVIEW.  At Duluth, Brandon and Logon did extra duty for all the LabVIEW teams.
  • Two teams had programmed their robots in Eclipse with the 2018 version of WPILib.
    Fortunately, this was easy to fix.  We installed the latest WPILib on their laptops and then used the import wizard to convert their projects to GradleRIO.
  • As usual, plenty of teams suffered from loose electrical connections.
    Pull-test all your connections; nothing should come loose.  All the wires to your batteries, circuit breaker, and PDP should be completely immovable.
  • If using the Limelight camera, consider their bandwidth best practices.
  • If you are using a Limelight and/or FRCVision on a Raspberry Pi, consider bringing an Ethernet switch to assist with troubleshooting.
  • Turn off the firewall on your laptops.

Loop time override warnings

An important message from Omar Zrien from CTR Electronics came out this weekend.  It addresses some warning messages that teams have been reporting:

  • Watchdog not fed within 0.020000s (see warning below).
  • Loop time of 0.02s overrun.

Anyone who uses CTRE’s components should read all of Omar’s posting, but relevant takeaways are:

  • Install the latest Phoenix firmware and add the corresponding Phoenix.json to your project.
  • Keep an eye on the number of get*, set*, and config* calls you make, since each call might consume a little processor time.
  • Don’t worry too much about the overrun warnings as long as your robot is performing.
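The second takeaway can be illustrated with a plain-Java sketch.  The MockTalon class below is a hypothetical stand-in for a CAN motor controller (it is not the Phoenix API); it simply counts calls to show why caching one read per loop beats calling get* at every use:

```java
// Hedged sketch: MockTalon is a hypothetical stand-in for a CAN motor
// controller, not the Phoenix API.  Each get* call would normally be a
// CAN bus transaction, so we count calls to show the savings from caching.
class MockTalon {
    int canCalls = 0;

    double getSelectedSensorPosition() {
        canCalls++;          // on real hardware, a bus transaction
        return 42.0;
    }
}

public class CachingExample {
    public static void main(String[] args) {
        // Wasteful: three bus reads on every iteration of a 50-loop run.
        MockTalon uncached = new MockTalon();
        for (int loop = 0; loop < 50; loop++) {
            double a = uncached.getSelectedSensorPosition();
            double b = uncached.getSelectedSensorPosition();
            double c = uncached.getSelectedSensorPosition();
        }
        System.out.println("uncached calls: " + uncached.canCalls);  // prints 150

        // Better: read once per loop and reuse the cached value.
        MockTalon cached = new MockTalon();
        for (int loop = 0; loop < 50; loop++) {
            double position = cached.getSelectedSensorPosition();
            double a = position, b = position, c = position;
        }
        System.out.println("cached calls: " + cached.canCalls);      // prints 50
    }
}
```

The uncached loop issues 150 bus transactions where the cached loop issues 50, which is the kind of per-loop savings Omar’s advice is after.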



LabVIEW Dashboard Camera Fixes

2019 Camera report

Lake Superior Regional and Northern Lights Regional (Duluth, Minnesota)

The following is a report from the Duluth CSAs on cameras and the dashboard.  As of Saturday afternoon (Week 2), we have seen a 100% camera success rate among the 123 teams split between the 2 regionals.  Our procedure for achieving this result is outlined below.

If the team’s camera works, we let them go without any changes.  This usually included the Limelight, Raspberry Pi, ShuffleBoard, and SmartDashboard, which presented very few problems with the FMS and NT.

For LabVIEW teams, or teams using a LabVIEW dashboard, who encountered issues, the following procedure was done in the pits.  If the team was able to connect while tethered to the robot after any of these steps, we sent them out to the field.

In the Pits:

1. Download the 2019.2.1 LabVIEW Dashboard.

A CSA would pass this on a flash drive to the team’s driver station.  The folder would be placed on the desktop with the following path:


2. If it is a LabVIEW team, convert their code to a fresh 2019.2 project as follows.  All projects were named 2019 Duluth Cameras so we could determine which teams we applied this to.  Always save when prompted during this conversion.

a) Start a brand new project like normal



b) Delete the following from the Begin VI.  Once cleared, it will look as follows. (Old Code)


Cleaned (Old Code)



c) Copy everything except the following from Begin (Old Code) and paste into New Code



d) Delete the following from Teleop. (New Code)





e) Copy everything except the following from Teleop (Old Code) and paste into New Code




f) Delete the following from Periodic Task (New Code)





g) Copy everything except the following from Periodic Task (Old Code) and paste into New Code



h) Delete the following from Autonomous Independent (New Code)





i) Copy everything except the following from Autonomous Independent (Old Code) and paste into New Code


We have not yet discovered what to do with robot global variables.  To be on the safe side, teams should recreate these in the new project and link them to the appropriate locations manually.


On the field:

Check if the NT Communication light is on


Once that light is green, do the following:

  • If one camera, select the camera and wait. On average it would take about 7 seconds to connect.
  • If 2 cameras, select the second camera first and let it boot up.  Then select the first camera.  It does not matter which side the cameras are on.  Each camera took about 7 seconds on average.

If you have any questions, feel free to contact me at the information below.  I hope this helps for future events!  We will be doing the same procedure at the Great Northern Regional (North Dakota) and will report back with results from that regional.


Brandon A. Moe

University of Minnesota – Duluth, 2020 Minnesota CSA
FRC Team 7432 NOS Mentor

Personal: moexx399@d.umn.edu


Preparing for Competition

Your robot is complete, so take a day or two to relax.  Soon you’ll need to start thinking about your next competition.  There are a few things you should prepare with respect to your control systems.

First things first:  A new version of the FRC 2019 Update Suite was released, so download it.  This is a mandatory update.   Also, take a look at the 2019 FRC Inspection Checklist. The robot inspectors will use this checklist to determine if your robot is legal.

Bring your code

Sometimes we see teams at competition without their robot software.  This can happen if a team only has one programmer who can’t make the trip, or maybe their programming laptop gets misplaced.  Don’t let this happen to you.  Back up your code to a flash drive or keep a recent copy on your driver’s station.  Or, keep your code online.

This will be especially important when you must re-image your roboRIO at the competition, since the re-imaging process will erase all software currently on your RIO.

Your driver’s laptop

The inspection checklist requires that you use this year’s driver station software (currently version 19.0).   Use the FRC Update Suite to install all new software onto all the driver’s laptops that you intend to bring to competition.  It will ask for your serial number; you can leave the serial number blank and you will get a 30-day evaluation mode.  You should also run the FRC Update on all your programmers’ laptops.

You definitely don’t want your laptops to do Windows auto-updates while at a busy competition.   To avoid this, make sure all your laptops have the latest Windows updates and then put the auto-updates on a temporary pause.  To do this, open up the Windows Settings tool and select “Update & Security”:


From this window check for any updates.  When the updates are done, select “Advanced options” and then turn on “Pause Updates”.  This should prevent your laptop from doing system updates when you need it for driving.



New roboRIO image

Team update 14 requires that all roboRIOs use image FRC_roboRIO_2019_v14.  This image was in the latest FRC Update Suite, so you must use the roboRIO Imaging Tool to update your RIOs.  This update was released after Stop Build day, so every single robot will need to apply this image at their first competition.  After re-imaging, you must redeploy your robot code.

Wait… Before you re-image your roboRIO, make sure you have a copy of your robot source code.

If you do not have your source code, the CSAs may be able to make a copy of your current executable code.  The procedure for this is to connect directly to the roboRIO and retrieve the relevant files from your /home/lvuser directory.  You can accomplish this with PuTTY or WinSCP.

If you are using TalonSRX or VictorSPX motor controllers controlled from the CAN bus, you must install the native libraries.  Get a copy of the Phoenix Tuner and run “Install Phoenix Library/Diagnostics”.

Your codebase

You will also need to update your build.gradle file to work with the v14 RIO image.  Just change the GradleRIO version to “2019.4.1”.  The first few lines of your build.gradle file should look like this:

plugins {
    id "java"
    id "edu.wpi.first.GradleRIO" version "2019.4.1"
}
You are using GradleRIO and this year’s WPILib software for Java and C++ development, aren’t you?  It’s possible that one or more teams will show up to the competition with code written against last year’s development environment.  For those folks the CSAs (or some friendly teams) will help them convert it.  The procedure for this is to install the latest WPILib and then use the import wizard to convert the project to GradleRIO.

Programming at the competition

Gradle tries to make sure you have all the latest support code.  Once a day it will try to connect to central servers to see if you have the latest libraries cached.  This is fine if you always have an internet connection, but it can be a problem if you’re away from wifi.

The solution is to switch to “offline” mode while at a competition.

In Visual Studio Code, select the two options: “WPILib: Change Run Deploy/Debug Command in Offline Mode Setting” and “Change Run Commands Except Deploy/Debug in Offline Mode”.


Eclipse and IntelliJ have offline modes in their Gradle settings.  If you build from the command line, add the “--offline” argument.


The Driver Station Log File Viewer

Every FRC team is familiar with the FRC Driver Station software.  It’s the tool we use to drive our robots, whether in competition or back at the shop.   Any serious driver will have tested every tab and button on this program.  Hopefully, they’ve also read the documentation.

When you installed the driver station, you also got the Driver Station Log Viewer.  The driver station records a lot of information about every driving session, whether in competition or in practice.   I know that some teams make use of the log viewer, but many never touch it, or only open it up when they’re in trouble.   Learning to use it will definitely upgrade your control systems diagnostic skills.

Introducing the log viewer

You can find the log viewer program installed in c:\PROGRA~2\FRCDRI~1, but the easy way to start it is directly from the driver station.   Click on the gear icon and select “View Log File”.


The log viewer will pop up.

In the upper left part of the screen you’ll see a list of all matches and test runs that this driver’s laptop has witnessed, along with the log’s time in seconds.  If you were connected to a competition’s Field Management System, it will display the match name and number.  The driver station starts logging data as soon as it connects, which may be several minutes before your match starts.  FRC matches are always 150 seconds, but most log files contain the pre-match time as well.  If the time is less than 150 seconds, there was probably an error that truncated the log.

Below the log file list is the log directory.  You may switch to another directory if you have a collection of log files from another driver’s laptop.


In the middle of the window you will see the graph of your robot’s critical parameters. Get familiar with the different parameters, their different scales on the left, and the time scale along the bottom.

By dragging a selection on the graph you will zoom in to take a closer look at the data.  Once you’ve started zooming you can use the scroll bar at the bottom to move forwards and backwards in time.  Note the blue and green lines at the top of the graph;  if you zoom in enough they will become individual dots, spaced out at 50 readings per second.  Robot communication runs at 50 cycles per second, so each dot represents one reading.  Note that occasionally a dot will be missing, indicating a lost network packet.  You can zoom back out to the full graph by hitting the “AutoScale” button.

Hitting the “Match Length” button will zoom the graph to exactly 150 seconds.  Then use the scroll bar to position the upper green line on the left edge of the display.

The checkboxes on the upper left let you toggle different parameters.  You can turn off some lines to get a better look at others.  Or, you can turn on fine grained data, such as the electrical current on each PDP channel.  There are two tabs organizing the selectors, either by major groups or by individual plots.


Move your cursor over the graph while watching the Details box in the lower left corner of the window. Message details will give you additional insight into the graph parameters.

A basic log review procedure

Start reading your logs regularly, and you’ll get a sense of what good and bad logs look like for your robot.

Sometimes, you will need to look at the logs of a stranger’s robot.   During a competition, it’s pretty common for the FTA to call up one of the CSAs and say “Something weird happened to that team’s robot.  Go check their logs”.   The following is a basic procedure for evaluating a robot log:

  1. In the upper left corner, select the log file corresponding to the match in question.  It’s easy to get the wrong match, so pay attention to the time stamps.   Glance at the graph and then click on a match or two prior to this one for comparison:
    1. Watch for notable differences in the yellow voltage line on the different graphs.  If the voltage in one match dips much lower, it may indicate a bad battery.
    2. Watch the green network latency line or the orange packet loss lines.  If network communication is bad in just one match there may be a problem with another robot, or some radio interference occurred during that match.   If network communication is always bad, your radio might be poorly positioned or might be malfunctioning.   Radios should be mounted horizontally and not be surrounded by metal.
    3. Reselect the match in question.  Look for any gaps in the graph that would indicate that something failed.  A roboRIO reboot creates a gap of about 10 to 15 seconds.  At the time of this writing, a radio reboot creates a gap of between 35 and 45 seconds. (Future radios will behave differently.)  A loose network cable will produce a gap of random length.
  2. Select the “Match Length” button and scroll until the green lines at the top are at the left edge.  Now you are seeing the full match on screen.
  3. The blue and green lines at the top of the graph are the “Robot Mode” indicators.

    1. The green lines on top are the autonomous period and blue lines are the teleoperated period.  You may notice a tiny gray line between green and blue indicating that your robot was in disabled mode for an instant.
    2. The blue and green lines on top were transmitted from the robot, and they indicate what your robot thought the operating mode was.  Below these lines are the DS mode lines, indicating the operating mode of the driver station.
      The robot mode lines should match the DS mode lines and there should be no gaps.
  4. Below the mode lines is a row of dots which are event markers in the Event List.  If you trace your cursor across the dots the text messages will appear in the Details window.
    1. The green, yellow and red markers are log messages generated by the underlying WPILib framework.  Also, anything your robot code prints will appear as a marker dot.
    2. You might see brown markers, indicating a brownout event, meaning the robot voltage fell below 6.8 volts.
    3. You might see purple watchdog markers, indicating that a MotorSafety object has gone too long without a signal, and has therefore been disabled temporarily.
  5. The big yellow graph is the battery voltage as recorded at the PDP.  Voltage should vary in the range between 12.5 and 8 volts.   Take note of the voltage before the match; a starting voltage below 12 indicates that an uncharged battery was installed.
    If there are times in the match where the robot stops for a moment, the graph will go flat.   If the voltage goes too low, the robot may experience a brownout.  Different batteries may go lower or may lose voltage more quickly.
  6. The red line shows the roboRIO CPU utilization.  I have never seen a problem with this graph, but a spike here might indicate that excessive processing is taking place, and might cause a watchdog error.
    Interestingly, autonomous code usually requires less CPU than teleoperated code.
  7. A gray line shows the traffic load on the CAN bus.  I have never seen a problem with this, and it’s always a uniformly jagged line.
  8. The green and orange lines at the bottom of the graph are the “Comms” group of statistics.   They show the health of your network communication.  Spikes in these graphs are common, so don’t worry unless you see bad network traffic for more than a couple of seconds.

    1. The green line shows network latency measured in milliseconds.  Hover your cursor over the lines to see the exact values.
      Typical trip times will be in the range of 5 to 20 ms.  Spikes of up to 60 ms are common.
    2. The orange line shows network packet loss in average packets lost per second.
      Losing 3 to 5 packets per second is pretty common.
  9. You can also view graphs of the current from the PDP.  You can enable groups of channels (such as channels 0 through 3), or individual PDP channel plots.  You may need to trace PDP channels back to specific motors to understand the output.
    Spikes in current may indicate motor stalls.  Watch for conditions where circuit breakers tripped.  Try comparing similar motors, such as the drive train motors, to see if any channel looks significantly different.
  10. At the top of the window is a tab labeled “Event List”.  Selecting it switches the display to show the text logs generated during your match.  Each line in this display corresponds to one of the “event marker” dots we discussed earlier.
    There’s a lot of color coding in this display.  The timestamps on the left are colored gray or green or blue denoting the disabled / autonomous / teleop modes.  Any line containing the word “warning” will be colored orange and any line containing “error” will be red.

    1. If you had seen a problem in the data graph display, you can look at the events list for the same time period, to get clues about what happened.
    2. The list will contain messages from the roboRIO.  There are informational logs about memory and disk capacity.  Pay special attention to orange warning messages about “Ping Results”; they tell you which robot components were responding, helping you diagnose network communication problems.   If your robot ever throws an Exception, it will be displayed as a red error message.
    3. Your robot software can also generate event logs.  Anything that your code prints to standard output will appear in the events logs.  You may choose to print out messages about what the robot is doing.  Print out when the robot does important things or when any commands are executed.  Print out your air pressure or some specific states your robot goes into.  This can be useful in general, but especially valuable when diagnosing an error condition.
      In 2018’s game, the FMS transmitted random game data at the beginning of each match, which many teams used to pick different autonomous routines.  Printing out the game data and the autonomous choices was useful for post-match analysis.
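The advice above about printing your own event messages can be sketched in a few lines of plain Java.  The MatchLogger class and its message format are our own invention (not a WPILib API); the point is just that anything printed to standard output lands in the event log, and a match-relative timestamp makes post-match analysis easier:

```java
import java.util.Locale;

// Hedged sketch: anything robot code prints to standard output appears in
// the Driver Station event log.  This helper class is our own invention --
// it prefixes each message with a match-relative timestamp.
public class MatchLogger {
    private final long startMillis = System.currentTimeMillis();

    /** Format a match-relative timestamp plus message. */
    static String format(double seconds, String message) {
        return String.format(Locale.ROOT, "[%6.2f] %s", seconds, message);
    }

    public void log(String message) {
        double t = (System.currentTimeMillis() - startMillis) / 1000.0;
        System.out.println(format(t, message));
    }

    public static void main(String[] args) {
        MatchLogger log = new MatchLogger();
        // Example messages of the kind suggested above: autonomous
        // choices, air pressure, important state changes.
        log.log("autonomous routine selected: center-cargo");
        log.log("air pressure: 110 psi");
        log.log("entering teleop");
    }
}
```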

Specific problems to investigate


One of the most important problems you can find in the logs is a brownout condition, where the voltage falls too low. When the voltage starts falling below 6.8 volts, the roboRIO will protect its own existence by disabling motor outputs.

  1. The most common cause of brownouts is bad batteries or uncharged batteries.  Note if brownouts correlate to certain batteries.
  2. Brownouts may also be caused by shorts and loose connections.  In particular, look for loose wires on the battery connections, the main breaker connections, and all the PDP connections.
    These brownouts may happen in every match.  They may correlate to violent actions.   Pull test all connections and otherwise check over the wiring.
  3. Binding in the system may cause brownouts.   Reduce the friction on everything.
  4. Too many motors can consume too much current.  See if brownouts correlate to actions that use many motors.  Consider increasing the ramp rate on your motor controllers; ramp rate is measured as the time it takes to go from zero power to maximum power.
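The ramp rate idea can be sketched as a software slew-rate limiter in plain Java.  This is only an illustration of the concept; on a real robot the ramp is configured on the motor controller itself, and the class name and constants here are our own:

```java
// Hedged sketch: a software slew-rate limiter illustrating what a motor
// controller "ramp rate" does.  rampSeconds is the time allowed to go from
// 0.0 to full output (1.0); each calculate() call represents one 20 ms loop.
public class RampLimiter {
    private final double maxStepPerLoop;
    private double current = 0.0;

    public RampLimiter(double rampSeconds, double loopSeconds) {
        this.maxStepPerLoop = loopSeconds / rampSeconds;
    }

    /** Move toward the requested output, but no faster than the ramp allows. */
    public double calculate(double requested) {
        double delta = requested - current;
        if (delta > maxStepPerLoop) delta = maxStepPerLoop;
        if (delta < -maxStepPerLoop) delta = -maxStepPerLoop;
        current += delta;
        return current;
    }
}
```

With a 0.5 second ramp and 20 ms loops, a sudden full-throttle request climbs by 0.04 per loop and reaches full output after 25 loops, spreading the current draw over half a second instead of demanding it all at once.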

About motor safety / watchdog errors

One message you may see in the event logs or on the console is “Output not updated often enough”, which indicates that one of your motors is not getting signals often enough.  Drive motor controllers are MotorSafety objects, and they will shut the motors down if they aren’t constantly fed signals.  This message usually means that some other part of your software is taking too much time.
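The motor safety mechanism can be sketched as a watchdog in plain Java.  SafeMotor is our own simplified mock of the idea behind WPILib’s MotorSafety (driven by an explicit clock so the timeout is easy to follow), not the real class:

```java
// Hedged sketch of the motor-safety idea: if the motor is not fed within
// the timeout, its output reads as zero.  SafeMotor is our own simplified
// mock of WPILib's MotorSafety behavior, driven by an explicit clock.
public class SafeMotor {
    private final double timeoutSeconds;
    private double lastFeedTime = 0.0;
    private double output = 0.0;

    public SafeMotor(double timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    /** Setting the motor also feeds the watchdog. */
    public void set(double value, double now) {
        output = value;
        lastFeedTime = now;
    }

    /** A starved watchdog cuts power rather than keep a stale command. */
    public double get(double now) {
        if (now - lastFeedTime > timeoutSeconds) return 0.0;
        return output;
    }
}
```

If your periodic code takes longer than the timeout between set() calls, the output drops to zero, which is the software analog of the “Output not updated often enough” shutdown.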

Further Reading: