Here is a link to a Haiku Deck I created that summarizes the skills I learned.
In the weeks concluding the third quarter, I accomplished several goals. While accomplishing goals is great, the skills I developed in attaining them were more important. My first goal was 3D printing a card holder for the help desk. After hours of experimenting and learning how to use SketchUp, we were finally able to produce a finished product. Along the way I practiced problem-solving by learning to work with 3D modeling software and to import and export STL files for 3D printing.
In addition, I worked on my Individual Learning Endeavor by developing a scouting app for Robotics. At Robotics competitions, scouting means keeping track of how other robots perform in their matches so that, if your team finishes as a top-eight seed, you can pick strong partners during the alliance selection process. Although we did not expect to finish in the top eight, in the event that we did, it was integral that we had data ready to interpret that could land us a powerful alliance and a first-place victory. In less than 24 hours, Harsh, Shams, Lynn, and I set out to develop an app that would collect match data, save it over time, and sync it to a database for later analysis. One problem we had to overcome was making it usable on all devices, whether iOS, Android, or a PC. Another was that we were unsure whether we would have WiFi in the arena at Northeastern, so we had to prepare for there to be none. We decided to create a single-page web application using AngularJS for the front end and set up Parse as our back-end service provider. AngularJS was the perfect choice for a framework that would work on all devices, and it also solved our issue of potentially having no WiFi at competition.
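The offline-first part of that design can be sketched roughly as follows. This is a hypothetical illustration, not the actual app code (the real app used AngularJS with Parse): each match record is queued in local storage so scouting keeps working without WiFi, and the queue is flushed to the back end whenever a connection becomes available.

```javascript
// Offline-first scouting queue (illustrative sketch, names hypothetical).
// `storage` is anything with getItem/setItem, e.g. window.localStorage.
function makeScoutingQueue(storage, key = 'scouting-queue') {
  const read = () => JSON.parse(storage.getItem(key) || '[]');
  const write = (queue) => storage.setItem(key, JSON.stringify(queue));

  return {
    // Always save locally first -- the arena may have no WiFi.
    saveMatch(record) {
      const queue = read();
      queue.push(record);
      write(queue);
    },
    // `upload` sends one record to the back end (e.g. a Parse save call);
    // records whose upload fails stay queued for the next attempt.
    async sync(upload) {
      const remaining = [];
      let synced = 0;
      for (const record of read()) {
        try {
          await upload(record);
          synced++;
        } catch (err) {
          remaining.push(record);
        }
      }
      write(remaining);
      return synced; // number of records successfully uploaded
    },
    pending() {
      return read().length;
    },
  };
}
```

Keeping the storage object injectable means the same queue logic runs against `localStorage` in a browser or an in-memory stub in tests.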
That’s where we called in Xin Zhang, the statistician, to construct a rating system based on the different contributions robots make on the field. He developed his own system of rating robots, titled RNL. A brief excerpt of his contribution and work can be seen below.
A much talked about statistic among FRC teams is the OPR rating – the expected contribution of each robot in a match (an OPR rating of 60 means the robot is expected to put up 60 points in a match). After the Nashua competitions, we were 5th in OPR among the ~40 robots that competed. While that is something to be proud of, a number of members, including myself, and this guy who is much smarter than I am and crunched a bunch of numbers, found that OPR is not a great predictor of success, especially in the tournament:
- Team 131, with the highest OPR of any team at the event (76.02), was eliminated in the final round. (probably not a strong case for my argument but this team is a huge outlier)
- Team 811, with the 2nd highest OPR (44.44), did not make it out of the first round.
- Team 4909 (hi Billerica), with the 3rd highest (42.97), did not make it out of the first round.
- Team 182, with the 4th highest (42.44), was eliminated in the quarterfinals.
- And of course, Team 2876, with an OPR of 41.09, did not make it out of the first round. 😦
What does this mean? While teams with high OPR are often drafted by the top eight, or are one of the top eight themselves, OPR is horrible at predicting the outcome of a match. The previously linked study also concluded that OPR is useful less than half the time, a horrible rate for such a widely distributed statistic.
Why does OPR fail half the time? Here are some hypotheses:
1. OPR is derived without team-specific statistics and instead relies on alliance statistics, so it is dependent on a team’s alliance members. Some inaccuracies are bound to occur.
2. OPR admittedly does not factor in defense. Alliances in the tournament like to get defensive, and that might have something to do with the early exits of the top OPR teams at Nashua.
3. Matches simply are not predictable. The Red Sox finished with 93 losses in 2012 – they won the World Series in 2013. The Celtics can’t start a winning streak to save their lives, yet can’t lose to the Miami Heat. The same goes for robots at FRC competitions.
I set out to create my own statistic, something that will hopefully eclipse the paltry <50% prediction rate of OPR. Because I am not creative and therefore cannot invent a cool name for this statistic, I have settled on “Rating to be Named Later”, or RNL. Voila:
H = number of high goals made
L = number of low goals made
T = number of truss assists
AU = number of autonomous goals (this will usually be just 1, but there are teams capable of scoring two)
AS = number of assists in-game
C = constant (for giggles)
According to some scattered posts floating around on Chief Delphi, OPR is calculated through a system of 160 equations and 40 variables, with matrices involved – complicated enough that even Microsoft Excel could not calculate these values without the use of macros. Compared to that, RNL looks dead simple – this is because team-specific data is used. While this doesn’t completely kill the dependency of this statistic on other alliance members, it does a better job than OPR.
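To get a feel for what that matrix math is doing, here is a toy version of the OPR idea (an illustration, not the exact Chief Delphi method): treat every match as an equation saying "the alliance members' contributions sum to the alliance score," then solve the resulting least-squares system – the normal equations – by Gaussian elimination.

```javascript
// Toy OPR: solve (A^T A) x = A^T b, where A is the alliance membership
// matrix and b is the vector of alliance scores. Each entry of x is a
// team's expected per-match contribution.
function oprRatings(matches, teams) {
  const n = teams.length;
  const idx = Object.fromEntries(teams.map((t, i) => [t, i]));

  // Accumulate the normal equations: M = A^T A, v = A^T b.
  const M = Array.from({ length: n }, () => Array(n).fill(0));
  const v = Array(n).fill(0);
  for (const { alliance, score } of matches) {
    for (const a of alliance) {
      v[idx[a]] += score;
      for (const b of alliance) M[idx[a]][idx[b]] += 1;
    }
  }

  // Gauss-Jordan elimination with partial pivoting.
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    }
    [M[col], M[pivot]] = [M[pivot], M[col]];
    [v[col], v[pivot]] = [v[pivot], v[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let c = col; c < n; c++) M[r][c] -= f * M[col][c];
      v[r] -= f * v[col];
    }
  }
  return Object.fromEntries(teams.map((t, i) => [t, v[i] / M[i][i]]));
}
```

With three hypothetical teams where alliance {A, B} scores 30, {B, C} scores 50, and {A, C} scores 40, this recovers contributions of 10, 20, and 30 – the kind of per-team decomposition OPR tries to extract from real match data.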
RNL does not set out to predict a match score, because that is a near-impossible task, and I am not a clairvoyant. Instead, RNL attempts to quantify the ability of a robot: the ability to score high and low goals, work with other robots, etc. It can be assumed that a robot with a high RNL finishes more cycles, or more meaningful cycles, than a robot with a low RNL.
It will not be easy to explain the coefficients in simple-sounding English words. Everything is based around the value of a low goal: 1. For example, a high goal is 1.21x more valuable than a low goal. Why isn’t it 10 times? While a high goal counts for 10, and a low goal counts for a measly 1, those are one-assist points. The weights in this equation take into account different scenarios involving goals, assists, truss assists, and autonomous mode performance.
The highest weight is given to assists. This should be a no-brainer – racking up assists is the key to winning games. Then come the high and low goals. Autonomous mode goals are given less weight than low goals, because failure in autonomous mode won’t cripple an alliance’s chance of winning, while the failure to score during tele-op will. Truss assists get the lowest weight out of the five, because while one accounts for a nice 10 points, too many teams try to do it, fail, and end up with lost time. Time is important.
(Foul points and truss + catch are left out completely, because they are too gimmicky and rely on a good bit of luck, and therefore mess up the data.)
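As a rough sketch, the weighted sum described above might look like the following. Only two coefficients are actually stated in the text (low goal = 1, high goal = 1.21x); the other weights here are placeholders that merely respect the ordering described – assists highest, then high goals, low goals, autonomous goals, truss assists – not Xin's real values.

```javascript
// Illustrative RNL computation. H = 1.21 and L = 1 come from the text;
// AS, AU, T, and C are placeholder values respecting the stated ordering
// (assists > high > low > autonomous > truss), not the actual weights.
const WEIGHTS = {
  AS: 2.0,  // in-game assists: highest weight (placeholder)
  H: 1.21,  // high goals: 1.21x a low goal (stated)
  L: 1.0,   // low goals: the baseline (stated)
  AU: 0.8,  // autonomous goals: below low goals (placeholder)
  T: 0.5,   // truss assists: lowest of the five (placeholder)
};
const C = 0; // the "for giggles" constant; value not given in the text

// `stats` holds a robot's per-match totals for H, L, T, AU, and AS.
function rnl(stats) {
  return (
    WEIGHTS.AS * stats.AS +
    WEIGHTS.H * stats.H +
    WEIGHTS.L * stats.L +
    WEIGHTS.AU * stats.AU +
    WEIGHTS.T * stats.T +
    C
  );
}
```

Because the statistic is a plain per-team weighted sum, recomputing a leaderboard after tweaking the weights is trivial – which fits the note below that the weights may shift with in-game results.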
The weights are subject to change depending on the in-game results at the competition, but they won’t deviate much. Hopefully RNL is better at its job than OPR is. Once I get sufficient data from the scouting team, I will post some RNL leaderboards (assuming I still haven’t come up with a better name for it yet), and maybe even share it with other teams at the competition.
Keep in mind this is not an attempt to replace the quantitative aspect of our scouting. As useful as statistics may be, numbers do not answer every question. But the hope is there will be a strong correlation between the RNL of a robot and the scouts’ opinions of it.
Through a combination of hard work and interdisciplinary research, we were able to develop a scouting app under nearly impossible time constraints and construct a custom rating system in time for competition. Had we placed in the top eight, we would have come prepared to draft the best possible alliance. The final product can be seen here.
Besides the 3D card holder and the scouting app, Harsh and I hope to kick-start our new podcast series, 1:1 Edu Tech Talks, in the coming weeks. We hope to keep track of the podcasts through SoundCloud and embed them into the main help desk site in a similar manner to Help Desk Live. At the moment, we are not exactly sure what our first topic will be centered on, but it will certainly be exciting.