HACK@LRN May 2016 – Learnosity Hackday
Recently we held a 24-hour Hackday here in our Sydney office. With the entire office split into teams of 2–4, the 24 hours were dedicated to frantic coding, animation, Asimov’s laws and establishing voice recognition… or at least optimistic signs of acknowledgement.
Pitches were held where people brought forward their ideas to garner support, and everyone had a week to organise themselves into teams. There were no rules or themes to limit creativity, only one guideline: it had to be for the betterment of our team, office, company or product.
The people’s choice for overall best, Learnosity Animations, showed us just how versatile our product truly is: they created a living storybook assessment aimed at holding youngsters’ attention and completely disguising our product in a very attractive wrapper. Read all about their idea here.
What follows is each team’s take on their idea, the ups and downs, and any new skills or knowledge they emerged with on the other side.
Beer Keg Monitoring
Our team set out to provide the Sydney office with cold beer on tap, along with some level of keg monitoring so we would never run out of beer. To do this we planned to build a browser-based dashboard that would show who was using the keg via facial recognition, as well as monitor the mass of beer remaining via a weight sensor from a converted WiiFit balance board.
Time was definitely our biggest issue. We ran into problems accurately identifying a pour event with the scales, which had a knock-on effect: while the dashboard and facial-recognition pieces were ready, we didn’t have enough time to integrate them as well as we would have liked, or to tweak their behaviour. We did successfully deliver a working fridge and a live dashboard that displayed pours and showed the quantity of beer remaining. We learnt that a plan B, time management and communication are critical in projects like this.
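We never fully nailed pour detection, but the core idea can be sketched as a threshold-and-settle pass over periodic weight samples from the scales. Everything below — names, thresholds, sample values — is illustrative, not our actual hackday code.

```typescript
// Detect pour events from a stream of keg weight samples (grams).
// A "pour" is a sustained drop larger than `minDrop` that then stabilises;
// `stableEps` absorbs sensor jitter between consecutive readings.

interface Pour {
  startIndex: number; // sample index where the drop began
  grams: number;      // estimated weight of beer dispensed
}

function detectPours(samples: number[], minDrop = 50, stableEps = 5): Pour[] {
  const pours: Pour[] = [];
  let i = 1;
  while (i < samples.length) {
    if (samples[i] < samples[i - 1] - stableEps) {
      const start = i - 1;
      // Follow the drop until the reading stabilises again.
      while (i < samples.length && samples[i] < samples[i - 1] - stableEps) i++;
      const drop = samples[start] - samples[i - 1];
      if (drop >= minDrop) pours.push({ startIndex: start, grams: drop });
    } else {
      i++;
    }
  }
  return pours;
}
```

In practice the hard part was exactly what `stableEps` hand-waves over: vibration and temperature drift made single-sample comparisons unreliable, which is why our real pour events were so hard to pin down.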
MIDI Question Type
To explore the potential of Chrome’s new support for MIDI on the web, we connected a MIDI piano to a Learnosity custom question type. Our demo can record a melody played on the piano and notate each note in real time on the screen. Our notation uses the abc format (a kind of LaTeX for music). A student can then attempt to replay the melody on the piano, and we validate whether they played it correctly. We didn’t get time to record and validate note durations, but Piano Hero for the browser isn’t too far away. =)
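As an illustration of the notation and validation steps, here is a hedged sketch (not our actual demo code) of mapping MIDI note numbers to abc pitches and checking a replayed melody. It is deliberately simplified: naturals and sharps only, no key-signature or duration handling.

```typescript
// Map a MIDI note number to an abc pitch. In abc, middle C (MIDI 60) is
// "C", the octave above is lowercase "c", higher octaves append "'",
// and lower octaves append ",". Sharps are prefixed with "^".

const PITCH_NAMES = ["C", "^C", "D", "^D", "E", "F", "^F", "G", "^G", "A", "^A", "B"];

function midiToAbc(note: number): string {
  const name = PITCH_NAMES[note % 12];
  // 0 for the middle-C octave (MIDI 60-71), negative below, positive above.
  const octave = Math.floor(note / 12) - 5;
  if (octave <= 0) return name + ",".repeat(-octave);
  return name.toLowerCase() + "'".repeat(octave - 1);
}

// Validate a replayed melody: same notes, same order, same length.
function matchesMelody(played: number[], expected: number[]): boolean {
  return played.length === expected.length &&
    played.every((note, i) => note === expected[i]);
}
```

The real demo also had to debounce MIDI note-on/note-off message pairs from the keyboard before anything like `matchesMelody` could run; that plumbing is omitted here.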
Slack Bot integration with Jira
Our project was to build an open-source Slack bot which integrates with Jira. It has a matching engine to post activity from Jira to various Slack channels by team, project, etc.
The bot was written in Go and deployed on our internal infrastructure using SaltStack. We used Travis CI for automated testing.
The code is MIT licensed and available on GitHub here:
We had some issues with the complexity of Jira’s APIs and their scattered documentation. There were also challenges because some of the data the system needed to understand was only semi-structured.
The project was specced out already, so the flow was basically to:
- get the bot working with both the Slack and Jira APIs
- implement a configurable matching engine so that different teams and individuals can configure the bot as they see fit
- write build scripts and Salt config to deploy the bot
We’ll continue to develop the project by adding richer Slack message formatting and some smarter grouping of events. We also want to add support for the Confluence activity feed.
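The matching engine is the interesting part of the flow above. The actual bot is written in Go, but the idea can be sketched compactly: rules with optional fields act as wildcards, so each team configures only the dimensions it cares about. The event and rule shapes below are invented for illustration and don’t reflect the bot’s real configuration format.

```typescript
// Route a Jira event to zero or more Slack channels via configurable rules.

interface JiraEvent {
  project: string;   // e.g. the Jira project key
  issueType: string; // e.g. "Bug", "Story"
}

interface Rule {
  channel: string;    // Slack channel to notify
  project?: string;   // if set, only match this project
  issueType?: string; // if set, only match this issue type
}

// Unset rule fields act as wildcards, so teams and individuals can
// configure the bot as broadly or as narrowly as they like.
function route(event: JiraEvent, rules: Rule[]): string[] {
  return rules
    .filter(r =>
      (r.project === undefined || r.project === event.project) &&
      (r.issueType === undefined || r.issueType === event.issueType))
    .map(r => r.channel);
}
```

One event can legitimately match several rules — a bug in a project can go to both that project’s channel and a company-wide bugs channel — which is why `route` returns a list rather than a single channel.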
Coding Question Type
Every now and then, a heated argument will emerge on the Learnosity #development channel in Slack. Someone has found an interesting algorithm problem on HackerRank and has challenged the rest of the company to solve it. The winner either gets kudos or emoji groans depending on the elegance and alacrity of their solution (and the number of bit shift operators/tildes used). As the competition heated up, we decided it was time to create a leaderboard.
This question type has potential in a variety of applications:
- internal tournaments and contests
- technical recruitment
- teaching programming skills (when combined with Adaptive Assessments)
- assessing programming skills (in examinations)
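For the leaderboard itself, the ranking logic is simple: correctness first, then speed as the tiebreaker (elegance, sadly, resists automation). This is a sketch with invented names, not the question type’s actual scoring code.

```typescript
// Rank challenge entries: most tests passed wins; among equals,
// the faster solver ranks higher.

interface Entry {
  name: string;
  passed: number;  // number of test cases passed
  seconds: number; // time taken to submit a solution
}

function leaderboard(entries: Entry[]): Entry[] {
  // Sort a copy so the caller's array is left untouched.
  return [...entries].sort((a, b) =>
    b.passed - a.passed || a.seconds - b.seconds);
}
```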
Amazon Echo Voice Control
Amazon recently launched a product called Echo: a smart, hands-free speaker with basic voice-command recognition. We wanted to extend Echo’s abilities so we could use it in our office, and decided we should be able to trigger a build-release process and send a Slack message via voice control.
On the evening before the hack day, we started familiarising ourselves with the Echo docs and confirming the feasibility of what we were trying to do.
The next day we were ready to work. While the basic Alexa–Lambda–skills integration was fairly straightforward to implement, we found that getting Echo to understand our intent was a bit tricky, i.e. dealing with different ways of describing the same action and nicknames for people in the office.
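The nickname problem boils down to slot normalisation: mapping the many ways people say a teammate’s name onto one canonical handle before the skill acts on it. The alias table below is invented for illustration; our real skill’s mappings (and its Lambda plumbing) looked different.

```typescript
// Normalise a spoken name slot from Alexa to a canonical office handle.
// Alias keys are lowercased spoken forms; values are canonical names.
// These entries are made-up examples.

const ALIASES: Record<string, string> = {
  "al": "alan",
  "big al": "alan",
  "alan garfield": "alan",
  "kirsteen": "kirsteen",
};

function canonicalName(slotValue: string): string | undefined {
  const key = slotValue.trim().toLowerCase();
  if (key in ALIASES) return ALIASES[key];
  // Already-canonical names pass straight through.
  if (Object.values(ALIASES).includes(key)) return key;
  return undefined; // unknown name: the skill should ask for clarification
}
```

Returning `undefined` for unknown names matters: a voice interface that silently guesses who “send it to Bazza” means is far more annoying than one that asks.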
We learnt that integrating your APIs with voice control really is that easy, and we look forward to everyone in the office integrating more fun things in the future!
RoboAlan
The RoboAlan project aimed to create a small, non-autonomous robot that lets our remote workers (or colleagues from our other offices) experience a level of telepresence in our Sydney office. It consists of a small electronic cart with a tablet display, allowing two-way realtime communication over the web via a browser interface, plus remote control of the cart to roam around in physical space.
RoboAlan was a project idea that had been kicking around the Learnosity office for a few hack days, initially suggested by our Senior Systems Engineer, Alan Garfield (the resident remote worker of Team Shiba), and vehemently championed by Team Shiba’s project manager, Kirsteen Eydmann. Its first iteration was created by hackday team “RoboAlan”, consisting of Phillip Broadbent, Kirsteen Eydmann, Andrew Morrison and Karol Tarasiuk.
Phillip and Karol worked tirelessly on the WebRTC browser-to-native-Android implementation, which consisted of a browser application that pushed webcam and microphone streams to an Android app for communication, with web controls to push directional commands via the WebRTC data channel. Likewise, the browser app received a live audio/video stream from the Android tablet’s camera and microphone.
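The command path can be sketched as a tiny wire protocol: the browser serialises a directional command, sends it over the data channel, and the Android side decodes it into the byte it writes to the cart over Bluetooth. The message format and byte values below are illustrative, not the protocol we actually shipped.

```typescript
// Directional commands sent browser -> Android over the WebRTC data channel.

type Direction = "forward" | "back" | "left" | "right" | "stop";

// Byte written to the cart over Bluetooth for each direction (made-up values).
const BLUETOOTH_BYTE: Record<Direction, number> = {
  stop: 0x00,
  forward: 0x01,
  back: 0x02,
  left: 0x03,
  right: 0x04,
};

// Browser side: serialise a command for dataChannel.send(...).
function encodeCommand(dir: Direction): string {
  return JSON.stringify({ cmd: dir });
}

// Android side: parse an incoming message and pick the Bluetooth byte.
// Anything unrecognised degrades to "stop" -- the safe state for a cart.
function decodeCommand(msg: string): number {
  const { cmd } = JSON.parse(msg) as { cmd: Direction };
  return BLUETOOTH_BYTE[cmd] ?? BLUETOOTH_BYTE.stop;
}
```

Defaulting unknown commands to `stop` is the important design choice: when a flaky data channel garbles a message, a telepresence cart should halt rather than keep rolling.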
Kirsteen and Andrew deftly constructed a kit-bashed cart that accepted Bluetooth controls from the Android application, so that all the WebRTC data-channel commands coming in over the web were quickly pushed to the cart over Bluetooth.
Many issues were faced, worked around or headbutted righteously on the day, not least working with the experimental draft specification that is WebRTC (or, more specifically, with libraries and packages that must constantly evolve alongside it), electronic component failure, and prototyping a cart from a short list of available materials. The team successfully completed the first iteration of RoboAlan on the day and are continuing to evolve it in spare moments.
We’d love to hear what you think about our hackday ideas and projects! Follow us on Twitter and let us know!
We’re looking for talented people to join our team, so if this excites you, get in touch and hack with us.