ICML (International Conference on Machine Learning)
Alright, short blurb. ICML 2011 was held in Bellevue, WA (near Seattle, WA, to get you oriented). Monica, Kaushik, Michael, and I had a paper there, nicely named: Apprenticeship Learning About Multiple Intentions. The rest of the post is an overview of the conference and what I found interesting.
The tutorial on Large Scale Recommender Systems was interesting. Points to take away from it, at a very high level, without going into details:
- A lot goes into a recommender system: modeling the user, defining the problem, and measuring performance. Each of those subproblems is challenging.
- On top of this is dealing with static, online, and hybrid (explore-exploit) ways of using data. If you use static methods, you can't react to changing users/content. If you use online methods, you can be led down dangerous paths (overfitting). Then there is the explore-exploit scenario: you occasionally choose to run experiments to get new (useful) data from your system (users + content).
- So in the advertising example, you show an ad that you don't normally show a couple of times and monitor the clicks on it; if the response is good enough, your system adjusts to show the new ad more.
- An interesting point made by Deepak Agarwal was on the Myth of Abundant Data. To take from the slides: Myth: we have so much data on the web that if we can only process it, the problem is solved. Reality: the number of things to learn increases with sample size, the rate of increase is not slow, and the dynamic nature of systems makes things worse.
- The big example used throughout the presentation was content delivery and placement on the Yahoo! landing page. There is a lot going on there in order to maximize the metrics that Yahoo! keeps track of. From Yahoo!'s CEO:
"Just look at our homepage, for example. Since we began pairing our content optimization technology with editorial expertise, we've seen click-through rates in the Today module more than double." - Carol Bartz, CEO, Yahoo! Inc. (Q4 2009)
- Tidbit: I worked on a (primitive) recommender system many moons ago: An intelligent multi-agent recommender system for human capacity building. That's why I have a soft spot for this tutorial.
- There was a lot more covered in the tutorial, so it's best to get the slides: Slides
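The explore-exploit idea from the tutorial can be sketched as a simple epsilon-greedy bandit. This is only an illustrative toy, not anything from the tutorial itself: the ad names and click-through rates below are made up, and real systems at Yahoo! scale are far more sophisticated.

```python
import random

def epsilon_greedy(click_rates, epsilon=0.1, rounds=10000, seed=0):
    """Serve ads for `rounds` impressions, exploring with probability epsilon.

    click_rates: dict mapping ad name -> true (hidden) click probability.
    Returns a dict of how many times each ad was shown.
    """
    rng = random.Random(seed)
    ads = list(click_rates)
    clicks = {ad: 0 for ad in ads}
    shows = {ad: 0 for ad in ads}
    for _ in range(rounds):
        if rng.random() < epsilon or not any(shows.values()):
            # Explore: run an experiment by showing a random ad.
            ad = rng.choice(ads)
        else:
            # Exploit: show the ad with the best empirical click rate so far.
            ad = max(ads, key=lambda a: clicks[a] / shows[a] if shows[a] else 0.0)
        shows[ad] += 1
        # Simulate the user: a click happens with the ad's true rate.
        if rng.random() < click_rates[ad]:
            clicks[ad] += 1
    return shows

shows = epsilon_greedy({"ad_a": 0.02, "ad_b": 0.05})
```

After enough rounds, the occasional exploration gives the system good estimates of both ads, and the better ad (here the hypothetical "ad_b") ends up shown far more often, which is exactly the "show the new ad more if the response is good enough" behavior described above.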
The other tutorial that was interesting was on Machine Learning and Robotics.
- This was not a SLAM tutorial; it was mostly about control for robotic manipulators.
- This translated nicely to the RL view of robotics.
- Ideas around Statistical Relational Learning for robots were also highlighted.
- Head over to the tutorial page and download the slides. Unfortunately the PDF does not have the video.
Sessions, Keynotes + Workshops
Keynote: Christopher Bishop gave an interesting presentation on important applications of Machine Learning (ML) and their research questions. He explained and demoed some of the interesting parts of the Kinect system. There is a lot going on inside the machine. One big challenge they had was calculating the positions of segmented body parts in each frame. They deliberately did not use information from the previous frame (reconstructed in 3D from the hardware) to assist in finding the body parts in the next frame. One outcome of this choice is that the system recovers from errors: one bad frame does not mean a thousand bad frames after it. There is more to the Kinect system than what I am posting about here; if you are tickled by Kinect, maybe it's time you got the SDK. The other application of ML he talked about was some of his research on how kids progress through allergies. Related paper: "Beyond Atopy - Multiple Patterns of Sensitization in Relation to Asthma in a Birth Cohort Study".
The best paper award went to Kevin Waugh, Brian Ziebart, and Drew Bagnell for their paper "Computational Rationalization: The Inverse Equilibrium Problem". This was Inverse Reinforcement Learning in a multi-agent setting. The multi-agent setting introduces the need to think about how the other agents in your world are going to behave: are they going to cooperate with you or be your adversaries? Thus, game theory enters the picture.
Keynote: Martin Nowak. He presented an interesting talk on evolution and how agents' cooperative-competitive behavior changes over time: "Evolutionary Dynamics of Competition and Cooperation". I even tweeted one of his observations:
Keynote: Hartmut Neven. The Google Goggles keynote we had all been waiting for. It was very good and also included little application snippets like "Google Goggles solves Sudoku". Google Image Search also uses the technology now. There was a lot to take in during this talk. Another lesson learned: good features are good friends to have in computer vision 🙂 He also presented some of his projects on quantum algorithms: basically converting optimization problems into a form that can then take advantage of quantum hardware. The future is now!
The last keynote was from David Ferrucci of IBM Research. He was the Principal Investigator for Watson. Yes, that Watson. I think that is enough said about that. You can catch him talking about Watson at the IBM Watson page.
There were a number of sessions throughout the conference. By my (biased) count there were three Reinforcement Learning tracks, and the Game Theory one can probably be thrown in to make it four. The poster sessions were good and allowed us to discuss our paper with some of the academics in the field who work on similar problems.
I attended the workshop on New Developments in Imitation Learning. The first talk, by Shie Mannor, discussed developments in imitating fighter jet pilots. We know about the IRL research with the helicopter, but imitating fighter pilot maneuvers moves us into a similar yet new space. This type of work really brings together researchers from multiple domains, as well as experts (the pilots), to solve a really challenging problem. There were other such talks in the Invited Cross-Conference talks.
The other workshop at the conference I jumped into was "Planning and Acting with Uncertain Models". Satinder Singh asserted that using predictions about current and future states will improve planning. He then gave a high-level view of his Predictive State Representation work, followed by some discussion.
There was a whole lot more at ICML so visit the website to get more information.
A big thank you to Fulbright Science and Technology Award for sponsoring the conference trip.