
Unit Testing Commands

The WPILib command framework divides your robot program into two types of classes: subsystems and commands.  The subsystem classes represent the major physical parts of the robot, such as a shooter subsystem, a drive-train subsystem, or a manipulator arm subsystem.  The command classes define the actions taken by the subsystems, such as shooting a ball, moving the drive-train, or raising the manipulator arm.

unit_test_commands

Most of your programming time will go into creating, refining, and debugging new commands.  Commands will be the most sophisticated part of your code, so they also carry the greatest risk of going wrong, and they deserve a large share of your testing time.

So far we have tested simple functions and verified the primitive functionality in subsystems.  The next step is to create automated tests for your commands.

Testing a simple Command

Our simple example robot contains a Shooter subsystem that shoots balls.  The ShooterSubsystem has a high-speed wheel for throwing the ball, and a servo arm that can raise the ball up until it touches the wheel.  We will need a command to set the wheel speed, and another to control the servo arm.
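
The subsystem itself isn’t shown in this article.  For reference, here is a minimal sketch of its interface as exercised by the tests below (the base class is assumed to be the experimental package’s SendableSubsystemBase, and the hardware details are illustrative):

package frc.robot.subsystems;

import edu.wpi.first.wpilibj.experimental.command.SendableSubsystemBase;

public class ShooterSubsystem extends SendableSubsystemBase {

  // Hardware omitted: a servo raises the ball and a motor spins the shooter wheel.

  public void fire() {
    // raise the servo arm until the ball touches the spinning wheel
  }

  public void retract() {
    // lower the servo arm away from the wheel
  }

  public void setSpeed(double speed) {
    // set the shooter wheel speed (0.0 to 1.0)
  }
}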

A simple Command

Here is the command to raise or lower the servo arm:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;

public class ShooterServoArmCommand extends SendableCommandBase {

  private final boolean fire;
  private final ShooterSubsystem shooter;

  public ShooterServoArmCommand(boolean fireArm, ShooterSubsystem shooterSubsystem) {
    fire = fireArm;
    shooter = shooterSubsystem;
    addRequirements(shooter);
  }

  @Override
  public void execute() {
    if (fire) {
      shooter.fire();
    } else {
      shooter.retract();
    }
  }

  @Override
  public boolean isFinished() {
    return true;
  }
}

Take note of the two parameters on the constructor:  fireArm and shooterSubsystem.   This command can either raise the arm or lower it, depending on whether the fireArm parameter is true or false.

By specifying the shooterSubsystem in the constructor we are using Dependency Injection, which makes the code more reusable and more testable.  When testing, we can replace the real subsystem with a mock object that fakes the subsystem’s functionality.

A simple Test

Our command does two different things: retracting and firing. First let’s test that firing the ball works:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;
import org.junit.*;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class ShooterServoArmCommandTest {

    private CommandScheduler scheduler = null;

    @Before
    public void setup() {
        scheduler = CommandScheduler.getInstance();
    }

    @Test
    public void testFireArm() {
        // Arrange
        ShooterSubsystem shooter = mock(ShooterSubsystem.class);
        ShooterServoArmCommand fireCommand 
                = new ShooterServoArmCommand(true, shooter);

        // Act
        scheduler.schedule(fireCommand);
        scheduler.run();

        // Assert
        verify(shooter).fire();
    }
}

The test follows our Arrange / Act / Assert pattern:

  • We create a mock version of our ShooterSubsystem.  If we wanted, we could also define some mock behaviors at this point (see the sketch after this list).
    We create the actual command we will test.  In this case we set the fireArm parameter to true, indicating that we want to fire the ball.
  • In the command framework, we never explicitly execute the command methods.  Instead, we “put it on the schedule”.   After this, the command scheduler will run the methods appropriately.  On a real robot, the scheduler tries to run all scheduled commands every 20 milliseconds.
    In this case we know that our command will only run once before it’s done.
  • At the end of the test, we ask the mock framework to verify that the shooter’s “fire” method was called exactly once.
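
For example, if a command’s logic depended on something the subsystem reports back, we could stub the mock before scheduling it.  A minimal sketch, assuming a hypothetical isAtSpeed() accessor (the ShooterSubsystem in this article has no such method):

// Requires: import static org.mockito.Mockito.when;
ShooterSubsystem shooter = mock(ShooterSubsystem.class);
// Stub a return value: whenever a command asks, the mock reports that the wheel is at speed.
when(shooter.isAtSpeed()).thenReturn(true);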

Unit tests will all execute whenever we build the code.  Go ahead and execute the “Build Robot Code” action within Visual Studio Code.  Next, write a similar test to verify that the command also correctly retracts the servo arm:

@Test
public void testRetractArm() {
    // Arrange
    ShooterSubsystem shooter = mock(ShooterSubsystem.class);
    ShooterServoArmCommand retractCommand = new ShooterServoArmCommand(false, shooter);

    // Act
    scheduler.schedule(retractCommand);
    scheduler.run();

    // Assert
    verify(shooter).retract();
}

 

Testing a Command Group

Simple commands can be grouped together to run sequentially or in parallel as more complicated commands.
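
The sequential case is shown below.  For the parallel case, a minimal sketch (assuming the experimental package also provides a ParallelCommandGroup; check your WPILib version):

// Run two commands at the same time: raise the servo arm while logging to the console.
// The shooter variable is assumed to be an existing ShooterSubsystem instance.
Command armWhileLogging = new ParallelCommandGroup(
        new ShooterServoArmCommand(true, shooter),
        new PrintCommand("Raising the servo arm"));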

A more complex Command

For instance, actually shooting a ball is a sequence of steps:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;

public class AutoShootCommand extends SequentialCommandGroup {
    public AutoShootCommand(ShooterSubsystem shooter) {
        super(
                new PrintCommand("BEGIN: AutoShootCommand"),
                new ShooterServoArmCommand(false, shooter),
                new ShooterSetSpeedCommand(1.0, shooter),
                new WaitCommand(0.5),
                new ShooterServoArmCommand(true, shooter),
                new WaitCommand(0.5),
                new ShooterSetSpeedCommand(0.0, shooter),
                new ShooterServoArmCommand(false, shooter),
                new PrintCommand("END: AutoShootCommand")
        );
    }
}

Note that we are again using dependency injection, and that the same ShooterSubsystem is used in all the internal commands.

Besides the shooter commands, we’ve also thrown in a couple of PrintCommands.  These commands print out to the console at the beginning and end of the command.  They also print to the Log File Viewer to be reviewed after a match.

Also we’ve thrown in a couple of WaitCommands, which give the shooter wheel half a second to spin up before shooting and then maintain speed while the ball is firing.
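
The ShooterSetSpeedCommand used above isn’t shown in this article.  A minimal sketch, assuming it follows the same one-shot pattern as the servo arm command and calls the subsystem’s setSpeed() method verified in the test below:

package frc.robot.commands;

import edu.wpi.first.wpilibj.experimental.command.*;
import frc.robot.subsystems.*;

public class ShooterSetSpeedCommand extends SendableCommandBase {

  private final double speed;
  private final ShooterSubsystem shooter;

  public ShooterSetSpeedCommand(double wheelSpeed, ShooterSubsystem shooterSubsystem) {
    speed = wheelSpeed;
    shooter = shooterSubsystem;
    addRequirements(shooter);
  }

  @Override
  public void execute() {
    shooter.setSpeed(speed);
  }

  @Override
  public boolean isFinished() {
    return true;
  }
}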

Testing a Command Group

A command group test follows the same pattern as simpler tests:

package frc.robot.commands;

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import org.junit.*;

import edu.wpi.first.wpilibj.experimental.command.CommandScheduler;
import frc.robot.subsystems.ShooterSubsystem;

public class AutoShootCommandTest {

    private CommandScheduler scheduler = null;

    @Before
    public void setup() {
        scheduler = CommandScheduler.getInstance();
    }

    @Test
    public void testShoot() throws InterruptedException {
        // Arrange
        ShooterSubsystem shooter = mock(ShooterSubsystem.class);
        AutoShootCommand command = new AutoShootCommand(shooter);

        // Act
        scheduler.schedule(command);
        for (int i=0; i<100; i++) {
            scheduler.run();
            Thread.sleep(20);
        }

        // Assert
        verify(shooter, times(2)).retract();
        verify(shooter, times(1)).fire();
        verify(shooter).setSpeed(1.0);
        verify(shooter).setSpeed(0.0);
    }
}

This command takes many run cycles to complete, so the test runs the scheduler many times, pausing 20 milliseconds between each execution.

After executing everything in the command group, we verify that the subsystem experienced all the actions for shooting.

Writing quality tests

It’s important to remember why we do unit testing: we create suites of automated tests to improve the quality of our software.  Writing quality tests is a big subject and these last three articles have covered a lot of ground.  It would be easy to be overwhelmed by all of this, or even skeptical of it.  So keep your eye on the end goal: software quality.

In a sense, writing methodical tests is a stepping stone from just programming into Software Engineering.  Engineering means using systematic and disciplined practices when creating things.  Your tests will verify and quantify your software quality, in a way that others can read and evaluate.


Simple Unit Tests

Every programmer has at one time deployed code without having tested it.  Simple changes go out with the assumption that they cannot possibly fail. And then they betray us.  We learn the lesson: all code must be tested, thoroughly and repeatedly.

On robots we often need the hardware to do some of the testing, but there are still a lot of tests that can be executed offline.  Ideally, you should build up suites of tests that execute automatically; just start one test program and all tests execute.  There are many categories of automated tests, but the most common are unit tests, which test small units of functionality, including the functions you assumed couldn’t fail.

Well-crafted unit tests will improve the quality of your software and ensure its quality down the road.  You may choose to organize your development around those tests, a practice called Test Driven Development.  Unit tests are also essential to refactoring, a systematic technique for improving your code; you’ll need automated tests to verify that your refactored code still works correctly.

Unit testing with WPILib

GradleRIO is already set up for the unit testing frameworks JUnit and GoogleTest.   If you define unit test classes in your project, they will automatically execute every time you build the code.

Let’s define a simple function and create unit tests for it.  Don’t worry that this code looks too simple to merit testing.   Remember that no code is so trivial that it won’t fail.

A simple function

Suppose you’ve got a Gyro installed on your robot.  When you first start it up, the gyro will show 0 degrees.  Rotate the robot a little to the right and it might read 30 degrees.  However, the gyro’s scale is continuous, so after a lot of driving around it might read 1537 degrees or -2781 degrees.  This might mess up the math in some of your autonomous commands, since 1537 degrees is really the same as 97 degrees.  We need a function that simplifies angles into the range -180 to 180.  Here are some test cases:

  • 270 degrees is the same as -90 degrees
  • 315 degrees is the same as -45 degrees
  • 30 degrees is still 30 degrees
  • -60 degrees is still -60 degrees

Here’s a simple conversion function.  It definitely isn’t perfect, but we’ll fix that in a minute:

public int simplifyAngle(int angle) {
    if (angle > 180) {
        angle = angle - 360;
    }
    if (angle < -180) {
        angle = angle + 360;
    }
    return angle;
}
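
In robot code this might be used to normalize a heading before comparing it to a target.  A sketch assuming WPILib’s ADXRS450_Gyro (any gyro with a getAngle() method works the same way):

// import edu.wpi.first.wpilibj.ADXRS450_Gyro;
ADXRS450_Gyro gyro = new ADXRS450_Gyro();
int heading = simplifyAngle((int) Math.round(gyro.getAngle()));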

For this example, this function is in your Robot class which is stored with the other java main classes in your “src” directory:

unit_test_func1

A simple unit test

Add a “test” directory under “src” for your Java unit tests.  Right-click on “src”, select “New Folder”, and enter “test/java/frc/robot”.  Then right-click on “robot” and create an empty class named “RobotTest.java”.

unit_test_test1.png
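
The new class needs the JUnit imports before any test methods will compile.  A minimal skeleton, with the package name matching the folder created above:

package frc.robot;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class RobotTest {
    // test methods go here
}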

Consider the test method:

@Test
public void testSimplifyAngle() {
    Robot robot = new Robot();
    assertEquals(-90, robot.simplifyAngle(270));
    assertEquals(-45, robot.simplifyAngle(315));
    assertEquals(-60, robot.simplifyAngle(-60));
    assertEquals(30, robot.simplifyAngle(30));
}

The @Test annotation on top means that this method will be executed by the GradleRIO test task.  We create a Robot object and then test our method for each of the test cases.

This test class will execute every time you build robot code.  If any of the assertions fail, the whole build will be rejected. To see what happens on a failure, temporarily change the 30 degree test so it expects -30 degrees. The build will fail and tell you to check line 15:

unit_test_fail1

Improving the function

How many test cases should you use?  Usually more than you would expect, even for simple functions.

Always include a trivial test case, sometimes called the “happy path” case. The 30 degree and -60 degree tests might be considered happy path tests, but we could also test 0 degrees.  Add some test scenarios where there are logical transitions; these are called “corner cases”.  For this example, corner tests might be at 180 degrees and -180 degrees.  Also test a couple of extreme cases, such as 1537 degrees and -2781 degrees.  Extreme tests at absolute maximums or minimums are called “edge cases”.

Now our test looks like this:

@Test
public void testSimplifyAngle() {
    Robot robot = new Robot();
    assertEquals(-90, robot.simplifyAngle(270));
    assertEquals(-45, robot.simplifyAngle(315));
    assertEquals(-60, robot.simplifyAngle(-60));
    assertEquals(30, robot.simplifyAngle(30));
    assertEquals(0, robot.simplifyAngle(0));
    assertEquals(180, robot.simplifyAngle(180));
    assertEquals(-180, robot.simplifyAngle(-180));
    assertEquals(97, robot.simplifyAngle(1537));
    assertEquals(99, robot.simplifyAngle(-2781));
}

Executing this test reveals that our function fails for the extreme cases.  Our function can’t handle 1537 degrees.  We’ve found a bug in our logic.   We go back to the original function and, after a little thought,  change it to the following:

public int simplifyAngle(int angle) {
    while (angle > 180) {
        angle = angle - 360;
    }
    while (angle < -180) {
        angle = angle + 360;
    }
    return angle;
}

Now our test passes.  The bug is fixed.

Refactoring

At some point, you or one of your teammates will rewrite parts of the robot code, at which point you must retest and verify that the new code is at least as good as the old.  For instance, someone might refactor the angle simplification like this:

public int simplifyAngle(int angle) {
    return angle > 180
        ? simplifyAngle(angle - 360)
        : angle < -180 ? simplifyAngle(angle + 360) : angle;
}

Does this function do the same job?  It turns out that it does. Is this function better? Well, it is shorter, but you should decide if it’s really more readable.

Eventually, you might stumble on logic like this:

public int simplifyAngle(int angle) {
    return (int) Math.round(Math.IEEEremainder(angle, 360.0));
}

This is even shorter.  It’s much more cryptic, but it does pass the tests.  You could use any of these functions in your robot.  Unit tests have verified that they all do the same thing.

Writing good tests

Now that you know how to create unit tests, start adding them to your robot projects. You will find that writing good tests is as difficult and subtle a skill as programming the robot code.  You should start watching for opportunities to test.  Break up your big methods into smaller methods and structure them so they are more amenable to testing.  Test the simple things, but especially watch for code that is tricky.

It’s probably possible to write too many tests, but don’t worry about that.  On professional projects the test suites are often larger than the baseline code.

Good unit tests should have the following qualities:

  1. Test the requirements and nothing but the requirements.  In the above example we require that 270 degrees is simplified down to -90 degrees.  However, don’t try to craft tests that verify the number of times the “while” loop executes to achieve this.
  2. Tests should be deterministic, always succeeding or failing based on the requirements.  Take care around code that depends on hardware, file systems, random functions, timers, or large memory usage.  Structure your code so you can manage any randomness.
  3. Unit tests should be fast.  They execute before every build, and you don’t want to start regretting how slow they are.
  4. Tests should be easy to read, understand, and maintain later.

The above example is intentionally simple.  Once you’ve mastered the concepts you can start to think about automated testing of larger classes, non-trivial state machines,  subsystems and commands.


FRC 2019 – Camera Best Practices

To get the most out of your cameras for FRC 2019, please consider following these recommendations. This document does not contain the theory behind the recommendations. If the theory is desired, or for any questions regarding these recommendations, please contact a MN CSA at firstmn.csa@gmail.com or http://firstmncsa.slack.com.

Desired goals that drive these recommendations

  • Low latency
    • Allows the driver to react to the most current robot status, with minimal delay between driver input and robot action.
  • Low bandwidth usage
    • Reduced risk of driver input being delayed due to high bandwidth usage.
      • There is a Quality of Service mechanism that should prevent this, but to fully eliminate the risk, reduce bandwidth if possible.
    • Bandwidth target is below 3 Mbps.
  • Ease of use

 

Possible Misconceptions

  • Higher FPS means lower latency.
    • While higher FPS can appear to reduce latency in a video game, that only occurs when the underlying infrastructure can support the desired FPS with minimal latency to begin with.
    • Low latency is a function of the infrastructure’s ability to get data from point A, the camera, to point B, the DS screen, with minimal delays. This can only occur if that infrastructure has spare capacity to process, transmit and display the video.
    • Higher FPS can easily overload the underlying infrastructure, which can cause delays at every stage of the point-A-to-point-B pipeline, thus increasing the overall latency.
    • Lowering FPS to a level at which the infrastructure can handle the pipeline, while still maintaining spare capacity, will assist in achieving the lowest possible latency.
  • High Resolution is better
    • High resolution is desirable if additional detail allows for a strategic advantage, but for most tasks, lower latency will offer a much better robot control experience.
    • 640×480 is not twice as much data as 320×240; it is 4 times as much. The extra time required to process, transmit and display 4 times the data is most likely not worth the higher latency and reduced capacity its use requires.
  • This or that device is the right one for all tasks.
    • Not all devices work well in all situations; you should weigh the total cost to implement, maintain and configure additional devices before making changes. Cost in this sense includes money, time, expertise, weight, etc.

 

Driver Cam

  • Use FRCVision on a Raspberry Pi instead of cameras hosted on the roboRIO
  • URL: https://wpilib.screenstepslive.com/s/currentCS/m/85074/l/1027241-using-the-raspberry-pi-for-frc
  • Benefits:
    • Potential for robot code to respond faster to driver input by offloading CPU-intensive tasks from the roboRIO.
    • Lower video latency and higher frame rates due to the increased CPU cycles available on the Pi.
    • Ability to handle more concurrent streams than a roboRIO.
    • Ability to control the stream from the FRC Shuffleboard and the LabVIEW Dashboard.
    • Ability to control resolution, FPS and compression per camera feed.
    • Ability to have a per-camera vision processing pipeline.
    • Multiple language choices for the vision processing pipeline.
    • No need to add code for basic camera streaming.
  • Recommended Usage:
    • Driver video streaming.
    • Video processing, target acquisition and tracking.
  • Recommended Equipment:
    • Raspberry Pi 3 B or B+, B+ preferred.
    • Microsoft Lifecam HD-3000
    • Logitech c920, c930, c270, c310
    • Any Linux UVC-supported USB camera that supports MJPEG and the desired resolution and FPS in camera hardware: http://www.ideasonboard.org/uvc/#devices
  • Recommended hardware settings, per camera.
    • Format: MJPEG
    • Resolution: 320×240
    • FPS: 15-20, reduce as needed to lower Pi CPU usage.
  • Recommended stream settings, per camera
    • Format: MJPEG
    • Resolution: 320×240
    • FPS: 10-15, reduce as needed to lower Pi CPU usage or bandwidth.
    • Compression: 30, adjust as needed to get the desired CPU usage, bandwidth and clarity.
  • Considerations:
    • Power: ensure 2.5+ amps of power are available to the Pi, especially if 3-4 cameras and / or a vision processing pipeline are in use.
    • The actual FPS per video stream as listed in the DS view should match the set FPS configured for that camera; if it does not, lower the FPS and / or resolution, or increase compression, until the actual and set FPS match and the video quality and latency are acceptable.

Limelight

  • Recommendations:
    • Rather than using driver mode, create a “driver” pipeline. Turn down the exposure to reduce stream bandwidth.
    • Using a USB camera? Use the “stream” NT key to enable picture-in-picture mode. This will dramatically reduce stream bandwidth.
    • Turn the stream rate to “low” in the settings page if streaming isn’t critical for driving.
  • Considerations:
    • Do NOT use for driver vision.
    • Use only for target acquisition and tracking.
    • Stream only the output of the vision pipeline to the DS, and only if bandwidth allows.

Chris Roadfeldt

 


2019 Week 2 CSA Notes

Duluth Double DECCer

In Minnesota, the week 2 events were the Lake Superior Regional and the Northern Lights Regional, both held in the Duluth Entertainment Convention Center (the DECC).

Our observations include:

  • All roboRIOs had to be re-imaged to version FRC_roboRIO_2019_v14.  This update was released after stop-build day, so every bagged robot had to be updated.
    If you haven’t yet attended your first 2019 competition, you can prepare for this by updating your laptops with the FRC Update 2019.2.0.
    If you are helping teams at competition with this, it might be a little quicker to give them the FRC_roboRIO_2019_v14 file and reimage their RIO.
  • All Java and C++ projects had to be updated to GradleRIO version 2019.4.1.  GradleRIO version changes always require initial software downloads, so the first build after changing your version must be done while connected to the internet.  It’s far better to do this before the competition, while you have a good network connection.
    If you are helping teams at the competition, you can give them the latest WPILib update.  This update will install all the latest GradleRIO dependencies, minimizing download time.
  • We were expecting camera problems for LabVIEW.  At Duluth, Brandon and Logan did extra duty for all the LabVIEW teams.
  • Two teams had programmed their robots in Eclipse with the 2018 version of WPILib.
    Fortunately, this was easy to fix.  We installed the latest WPILib on their laptops and then used the import wizard to convert their projects to GradleRIO.
  • As usual, plenty of teams suffered from loose electrical connections.
    Pull-test all your connections; nothing should come loose.  All the wires to your batteries, circuit breaker, and PDP should be completely immovable.
  • If using the Limelight camera, consider their bandwidth best practices.
  • If you are using a Limelight and / or FRCVision on a Raspberry Pi, consider bringing an Ethernet switch to assist with troubleshooting.
  • Turn off the firewall on your laptops.

Loop time override warnings

An important message from Omar Zrien of CTR Electronics came out this weekend.  It addresses some warning messages that teams have been reporting:

  • Watchdog not fed within 0.020000s (see warning below).
  • Loop time of 0.02s overrun.

Anyone who uses CTRE’s components should read all of Omar’s posting, but relevant takeaways are:

  • Install the latest Phoenix firmware and add the corresponding Phoenix.json to your project.
  • Keep an eye on the number of get*, set*, and config* calls you make, since each call might consume a little processor time (see the sketch after this list).
  • Don’t worry too much about the overrun warnings as long as your robot is performing.
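
For example, configuration calls can be issued once at startup instead of on every loop iteration.  A minimal sketch using the Phoenix API (the CAN ID and ramp value are illustrative):

// import com.ctre.phoenix.motorcontrol.can.TalonSRX;
// Call this once from robotInit(); config* calls can block briefly, so keep them out of periodic loops.
TalonSRX shooterMotor = new TalonSRX(5);        // CAN ID 5 (illustrative)
shooterMotor.configOpenloopRamp(0.2, 30);       // ramp time in seconds, timeout in milliseconds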

 


LabVIEW Dashboard Camera Fixes

2019 Camera report

Lake Superior Regional and Northern Lights Regional (Duluth, Minnesota)

The following is a report from the Duluth CSAs on cameras and the dashboard. As of Saturday afternoon (Week 2) we have seen a 100% success rate of working cameras among the 123 teams split between the 2 regionals. Our procedure to get this result is outlined below.

If the team’s camera setup worked, we let them go without any changes. This usually included the Limelight, Raspberry Pi, Shuffleboard, and SmartDashboard. These presented very few problems with the FMS and NT.

For LabVIEW teams (or teams using a LabVIEW dashboard) that encountered issues, the following procedure was done in the pits. If the team was able to connect after any of these steps while tethered to the robot, we sent them out to the field.

In the Pits:

1. Download the 2019.2.1 LabVIEW Dashboard.

A CSA would pass this to the team’s driver station on a flash drive. The folder would be placed on the desktop with the following path:

C:\Users\pcibam\Desktop\FRC_Dashboard\Dashboard.exe

2. If it is a LabVIEW team, convert their code to a fresh 2019.2 project as follows. All projects were named 2019 Duluth Cameras so we could determine which teams we applied this to. Always save when prompted during this conversion.

a) Start a brand new project like normal

lv_cam_1_new

 

b) Delete the following from the begin VI. Once cleared it will look as follows. (Old Code)

lv_cam_2

Cleaned (Old Code)

lv_cam_3_cleaned

 

c) Copy everything except the following from begin. (Old Code) and paste in New Code

lv_cam_4_paste_in_new

 

d) Delete the following from Teleop. (New Code)

lv_cam_5_delete_from_teleop

Cleaned

lv_cam_6_cleaned

 

e) Copy everything except the following from Teleop (Old Code) and paste in New code

lv_cam_6a_cam_copy

 

 

f) Delete the following from Periodic Task (New Code)

lv_cam_7_delete_from_periodic

Cleaned

lv_cam_8_cleaned

 

g) Copy everything except the following from Periodic Task (Old Code) and paste in New code

lv_cam_8_copy_except

 

h) Delete the following from Autonomous Independent (New Code)

lv_cam_9_delete_from_auton

Cleaned

lv_cam_10_cleaned

 

i) Copy everything except the following from Autonomous Independent (Old Code) and paste in New code

lv_cam_11

We have not discovered what to do with robot global variables at this time. To be on the safe side teams should recreate these in the new project and link them to the appropriate locations manually.

 

On the field:

Check if NT Communication Light is on

lv_cam_12_dashboard

Once that light is green do the following:

  • If using one camera, select the camera and wait. On average it takes about 7 seconds to connect.
  • If using 2 cameras, select the second camera first and let it boot up, then select the first camera. It does not matter which sides the cameras are on. Each camera takes about 7 seconds on average.

If you have any questions, feel free to contact me at the information below. I hope this helps at future events! We will be doing the same procedure at the Great Northern Regional (North Dakota) and will report back with results from that regional.

 

Brandon A. Moe

University of Minnesota – Duluth, 2020 Minnesota CSA
FRC Team 7432 NOS Mentor

Personal: moexx399@d.umn.edu


Preparing for Competition

Your robot is complete, so take a day or two to relax.  Soon you’ll need to start thinking about your next competition.   There are a few things you  should prepare for with respect to your control systems.

First things first:  A new version of the FRC 2019 Update Suite was released, so download it.  This is a mandatory update.   Also, take a look at the 2019 FRC Inspection Checklist. The robot inspectors will use this checklist to determine if your robot is legal.

Bring your code

Sometimes we see teams at competition without their robot software.  This can happen if a team only has one programmer who can’t make the trip, or maybe their programming laptop gets misplaced.  Don’t let this happen to you.  Back up your code to a flash drive or keep a recent copy on your driver’s station.  Or, keep your code online.

This will be especially important when you must re-image your roboRIO at the competition, since the re-imaging process will erase all software currently on your RIO.

Your driver’s laptop

The inspection checklist requires that you use this year’s driver station software (currently version 19.0).   Use the FRC Update Suite to install all the new software onto every driver laptop you intend to bring to competition.  It will ask for your serial number; you can leave it blank and you will get a 30-day evaluation mode.  You should also run the FRC Update on all your programmers’ laptops.

You definitely don’t want your laptops to run Windows auto-updates while at a busy competition.   To avoid this, make sure all your laptops have the latest Windows updates and then put auto-updates on a temporary pause.  To do this, open the Windows Settings tool and select “Update & Security”:

prep_settings

From this window check for any updates.  When the updates are done, select “Advanced options” and then turn on “Pause Updates”.  This should prevent your laptop from doing system updates when you need it for driving.

prep_pause

 

New roboRIO image

Team Update 14 requires that all roboRIOs use image FRC_roboRIO_2019_v14.  This image is included in the latest FRC Update Suite, so you must use the roboRIO Imaging Tool to update your RIOs.  This update was released after Stop Build Day, so every single robot will need to apply this image at its first competition.  After re-imaging, you must redeploy your robot code.

Wait… Before you re-image your roboRIO, make sure you have a copy of your robot source code.

If you do not have your source code, the CSAs may be able to make a copy of your current executable code.  The procedure for this is to connect directly to the roboRIO and retrieve the relevant files from your /home/lvuser directory.  You can accomplish this with PuTTY or WinSCP.

If you are using TalonSRX or VictorSPX motor controllers controlled from the CAN bus, you must install the native libraries.  Get a copy of the Phoenix Tuner and run “Install Phoenix Library/Diagnostics”.

Your codebase

You will also need to update your build.gradle file to work with the v14 RIO image.  Just change the GradleRIO version to “2019.4.1”.  The first few lines of your build.gradle file should look like this:

plugins {
    id "java"
    id "edu.wpi.first.GradleRIO" version "2019.4.1"
}

You are using GradleRIO and this year’s WPILib software for Java and C++ development, aren’t you?  It’s possible that one or more teams will show up to the competition with code written against last year’s development environment.  For those folks the CSAs (or some friendly teams) will help them convert it.  The procedure for this is to install the latest WPILib on their laptop and then use the import wizard to convert the old project to GradleRIO.

Programming at the competition

Gradle tries to make sure you have all the latest support code.  Once a day it will try to connect to central servers to see if you have the latest libraries cached.  This is fine if you always have an internet connection, but it can be a problem if you’re away from wifi.

The solution is to switch to “offline” mode while at a competition.

In Visual Studio Code, select the two options: “WPILib: Change Run Deploy/Debug Command in Offline Mode Setting” and “Change Run Commands Except Deploy/Debug in Offline Mode”.

prep_offline

Eclipse and IntelliJ have offline modes in their Gradle settings.  If you build from the command line, add the “--offline” argument.