This is the group page of Team Pill Dickle.
Our team name is intentionally spelled that way, because "Team Dill Pickle" didn't vibe with us.
- Arjun Menon
- Mebaa Kidane
- Chris Julian
- Fei Tie
1/27/11: Through a freak chance, we have successfully run MiGIO on WinXP with Visual Studio 2010 on a loaner PC. No team members own a 32-bit machine, though. Looking into virtualization…
1/28/2011: Successfully virtualized 32-bit Windows with VMware, installed drivers for the Pleo, and got VS2010 running. Will try running MiGIO soon.
2/14/2011: Used MySkit to walk the full length of project 1's square.
2/17/2011: The demonstration didn't go that well. The only problems encountered were with fetching and processing images through the webserver.
2/19/2011: Successfully wrote the report with good input from all members.
2/21/2011: Where is project 2?
?/??/????: Project 2 went off without a hitch. We managed to track the fruit and our position in real time, and built a very basic linear controller to decide how to move towards the fruit given bearing and distance. We built a blue helmet for the robot and left the body untouched. Using OpenCV colour detection, we searched globally for green pixels and found their centre of mass, and did the same for red and blue; this gave us the head, body and apple locations. Then, using vector mathematics, we created a head-to-body vector and an apple-to-body vector. The magnitude of the head-to-body vector was used to scale all distances, since it is the only invariant across setups. Using cross products, we could calculate whether the dinosaur is facing to the left of the apple, to the right, or dead on. With these two numbers, bearing (direction) and distance, we wrote simple controllers that would minimize both values. Our motion primitives were turn right, turn left and walk forward.
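The centre-of-mass and cross-product arithmetic described above can be sketched roughly as follows. This is a minimal NumPy sketch under assumptions, not our actual VS2010 code: the function names are made up, and the left/right sign convention must be checked against your setup, since image y-coordinates grow downwards.

```python
import numpy as np

def centre_of_mass(mask):
    """Centre of mass (x, y) of the nonzero pixels in a colour mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def bearing_and_distance(head, body, apple):
    """Return (side, distance): side is +1 or -1 depending on which side
    of the head-to-body axis the apple lies (0 means dead on), and
    distance is scaled by the head-to-body length, the only invariant
    across setups.  All points are 2-D pixel coordinates."""
    head_vec = head - body             # body-to-head vector
    apple_vec = apple - body           # body-to-apple vector
    scale = np.linalg.norm(head_vec)
    # 2-D cross product: its sign says which side the apple is on
    cross = head_vec[0] * apple_vec[1] - head_vec[1] * apple_vec[0]
    side = int(np.sign(cross))
    distance = np.linalg.norm(apple_vec) / scale
    return side, distance
```

A simple controller then just turns until `side` flips (or `cross` is near zero) and walks forward until `distance` is small.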
Project 3 involved manipulation and obstacle avoidance. We were confident of tackling obstacle avoidance, and we achieved it through simple A-star search. We used background subtraction to identify obstacles; once the apple is added, we perform the same red, green and blue global centre-of-mass calculation to pinpoint the head, body and apple. A-star search ran on a gridded-up version of the world and was built never to open nodes marked as obstacles. The old linear controller was used to move the Pleo from node to node. Every time he reached a new node, he would replan a fresh path, to prevent the errors in control from compounding. Unfortunately, the coarse resolution of the Pleo's movement was a severe impediment to the motion of the robot. Finally, we were unable to demonstrate the manipulation-planning portion of our project, since the Pleo could not reach the apple with an orientation amenable to actually engaging the return and manipulation. It mostly walked around the apple knocking it about, unfortunately.
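The grid A-star described above can be sketched like this. It is a toy Python version under assumptions, not our real code: the `astar` name, the 0/1 grid encoding and the 4-connected moves are illustrative choices; the key property from the text is that obstacle cells are never opened.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 2-D occupancy grid where grid[r][c] == 1 marks an
    obstacle.  Returns the path as a list of (row, col) cells, or
    None if the goal is unreachable."""
    def h(cell):  # Manhattan distance: admissible for 4-connected moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()            # tie-breaker so the heap never compares cells
    open_heap = [(h(start), 0, next(tie), start, None)]
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        _, g, _, cell, parent = heapq.heappop(open_heap)
        if cell in came_from:          # already expanded via a cheaper route
            continue
        came_from[cell] = parent
        if cell == goal:               # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            # never open out-of-bounds or obstacle cells
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    heapq.heappush(open_heap, (ng + h(nbr), ng, next(tie), nbr, cell))
    return None
```

Replanning at every node, as we did, is then just calling `astar` again from the Pleo's freshly observed cell.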
4/25/2011: The final project is very difficult. It penalizes us heavily for relying on colour information as we did in the previous projects, and with the introduction of coloured opponents we anticipate huge problems with localization. We are addressing this by using the initially provided user input about the Pleo's location to build a history of its positions, and using that history to search locally (in pixels) for the component markings and colourations specific to our robot, and thus localize.
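The local pixel search can be sketched as a clamped crop around the last known position (hypothetical names and NumPy, not our actual code):

```python
import numpy as np

def local_window(image, last_xy, radius):
    """Crop a search window around the robot's last known pixel
    position, clamped to the image bounds.  Searching for our
    robot's markings only inside this window avoids confusing it
    with similarly coloured opponents elsewhere in the frame.
    Returns the crop and its (x0, y0) offset so detections can be
    mapped back to full-frame coordinates."""
    h, w = image.shape[:2]
    x, y = last_xy
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    return image[y0:y1, x0:x1], (x0, y0)
```

Each successful detection then updates the history, so the window tracks the robot from frame to frame.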
It would have been nice to have had some advice on these projects from people who had worked on this before, or even hints to keep us from getting stuck on dead ends. None was readily available to us, so this wiki entry is being written so that some poor soul doesn't repeat our mistakes. If you have a Pleo, please tackle these problems well in advance:
0) Get Visual Studio, OpenCV and 32-bit Windows. Make sure 32-bit Windows is installed natively, because 32-bit Windows under VMware is slow as hell. It seriously helps if you have a laptop with a better-than-average graphics card. Secondly, there are ways to partition your hard drive under 64-bit Windows so that you can install a 32-bit copy of Windows on the other partition; (ab)use MSDNAA access for this. Having more than one computer able to compile and run code on the Pleo is a luxury.
1) Use OpenCV to identify WHERE the robot is, based on colour information AND historical data about its position. You need to know how to track reliably. We did not use blob detection, and therefore had to hand-code a lot of our own routines; in hindsight, we probably failed to leverage important built-in functions.
2) OpenCV mouse events and keypress events are pretty bad; address this ASAP and make sure you can get images reliably without the events fouling things up.
3) OpenCV image fetching: this burned us on the first project, but it's imperative you know how you're going to fetch images and how you're going to SAVE them. You need to be able to fetch from something like WebcamXP's image-server feature via its IP address.
4) Know A-star search like the back of your hand. It is important that you create SEVERAL test scenarios for your own A-star implementation and make sure it works on all the obvious grid worlds.
5) MySkit is terrible; avoid it like the plague.
6) Manipulation is the hardest part of planning.
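The image-fetching advice in point 3 can be sketched as below. This is hypothetical Python with a made-up camera URL (our actual setup was VS2010/OpenCV against WebcamXP), but the idea is the same: pull one JPEG frame over HTTP and save it under a timestamped name so frames never overwrite each other.

```python
import time
import urllib.request

# Hypothetical address: WebcamXP can serve the current frame as a JPEG
# over HTTP; the exact host, port and path depend on its configuration.
CAMERA_URL = "http://192.168.1.50:8080/cam_1.jpg"

def frame_filename(prefix, t):
    """Timestamped filename so saved frames never overwrite each other."""
    return "%s_%s.jpg" % (prefix, time.strftime("%Y%m%d_%H%M%S", time.gmtime(t)))

def fetch_frame(url=CAMERA_URL, timeout=2.0):
    """Fetch one JPEG frame as raw bytes; raises on network error."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

def save_frame(data, prefix="frame"):
    """Write raw JPEG bytes to a timestamped file and return its name."""
    name = frame_filename(prefix, time.time())
    with open(name, "wb") as f:
        f.write(data)
    return name
```

The saved files can then be loaded into OpenCV for the colour processing described in the project entries above.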