Saturday

Final Project, V: Putting It All Together

       It was two days before the exhibit when things started moving very fast. We had a head, a jaw attached to it, and eyes and eyelids on it. We also had the code for everything except the jaw. We had craft materials, such as yarn, paint and glue, thanks to Lyn, who took me and Juliette to a crafts store on Marathon Monday. We also had the sound files re-recorded in Marie's voice, because... well, it was just more effective when she said it. So here is how everything came together.
       The week before, Chris had visited and shown us how to make the NXT play a sound file. This was great, except the NXT was not really audible even at top volume. So I asked Lyn if he could get us a microphone and speaker set for amplifying the sound. And he did! That was one problem solved. After about 3 hours of filing with various instruments, I got the jaw to fit into the skull. Only then was I able to test my code for playing the sound files. I soon realized that the software wasn't going to work before I fixed the mechanical issues. I had to play around with the gear ratio and decided on 1 to 8. I also had to figure out a way to attach the motor to the skull. I had wanted it to be in the box under the skull, nicely hidden away. However, carrying the power to the jaw's rod was too complicated for me in that case. So, I drilled two holes in the sides of the skull and put two rods through them. I attached the motor to these rods and built the gears next to and on top of it. This proved to be quite a stable structure. Furthermore, I took off the tape that held the other two motors together and used Lego pieces to stabilize them. This took quite a bit of time and loads of help from Lyn. While I was doing that, Christine and Marie were building a papier-mâché head for Wendy. Marie also found a ridiculous little nose for her on the crafts table that we later attached to her face. While the papier-mâché was drying, this is how the final mechanism looked:


Sideview of the finished mechanism

Top view into the "skull"

Bottom view showing the many pieces making the structure stable
The papier-mâché head drying next to the box holding the sensors and bricks
       The next day, we all came together to work. I had started making a wig for Wendy, and Christine took that over while I worked on the jaw program. I wrote the code for each phrase separately and tried different angles for different syllables. I ended up with code that gently opened and closed the mouth in a generic talking motion, lasting a different amount of time for each phrase. Then, Marie helped me put these in a case structure to alternate between phrases. I did not want to put them in an infinite while loop and create an unpredictable puppet. Instead, I incorporated two sound sensors into our design at the last minute. While Marie and Christine were painting the head and the face, I was getting background noise readings of the Leaky Beaker. Then, Marie and I changed the program so that our puppet would say "Come closer." when the sound level was a little over the background. We also got readings for a conversation and used the average of these values as a speech threshold. When this threshold was reached, Wendy would say "Hi!" or "Hello!" If the sound level remained above the threshold for a period of time after this, she would proceed to say "Hmmm... interesting!" or, alternatively, "Tell me more!", assuming that someone had been talking to her all this time. At this point, her cosmetics were also done. We put on her head, her eyebrows and her wig, and she was ready to go! Well, except for her gigantic head. We covered this up by putting a hat on her. Surprisingly, she looked pretty good this way, and a lot like Marie!

Wendy with her makeup on
The resemblance is undeniable!
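Our actual trigger logic lived in a LabVIEW case structure, but the idea can be sketched roughly in Python. The sound levels and threshold numbers below are made-up placeholders, not our real calibration values, which came from the background readings described above:

```python
import random  # stand-in for alternating between phrase variants

# Placeholder calibration values -- ours came from actual readings
BACKGROUND = 55        # ambient sound level of the room
NEAR_THRESHOLD = 60    # "someone is nearby": a little over background
SPEECH_THRESHOLD = 70  # average level of a conversation

def choose_phrase(level, was_talking):
    """Pick Wendy's phrase from the current sound level.

    was_talking: True if the level has stayed above SPEECH_THRESHOLD
    for a while, i.e. someone has been talking to her."""
    if level >= SPEECH_THRESHOLD:
        if was_talking:
            return random.choice(["Hmmm... interesting!", "Tell me more!"])
        return random.choice(["Hi!", "Hello!"])
    if level >= NEAR_THRESHOLD:
        return "Come closer."
    return None  # stay quiet at background levels
```

The point of the two thresholds was exactly this layering: a small rise over background lures people in, and only sustained conversation-level sound gets the "listening" responses.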
        We were finally ready! We moved Wendy, her box, the speakers, the microphone and my laptop upstairs to the exhibit area. The microphone and speakers needed to be attached to my computer at all times to function. The speakers also needed to be plugged in. We put one speaker on each of the two front corners of our table. The microphone was taped to the NXT producing sound and stowed away under the box. The eyebrows and eyelids were set. Our sign was in front of us. We turned her on!
         Wendy did OK but was not very reliable during the exhibit. She had a couple of issues. First of all, the eye code depended on a background reading to detect motion. This reading changed with every restart. If someone was standing in front of one of the sensors during that first second, the eyes wouldn't function properly. We had to reset many times until we got a working setup. The talking part was also flawed. People weren't able to tell what she was saying. When we explained it to them, they started talking to her, but she didn't respond quickly enough because of the wait we had put between the phrases. Some of her parts came off during the resetting process and needed to be fixed, so we kept taking off her head and repairing her constantly. Marie fixed most of these issues for the Maker Faire the next day. With the eyes tracking properly, she was truly creepy. Despite her imperfections, she provided amusement for a good while. We put her up in the studio, facing the back entry to the Science Center, so she can happily creep on many others to come.
Lyn and Marie at the end of the exhibit: Lyn is working that wig!

Friday

Final Project, IV: Making the Head

       While I was working on the jaw all this time, Marie and Christine had already produced code for their parts. They had used 3 ultrasound sensors for detecting movement. The code determined which one or two were covered, and this gave the direction in which a person was moving. The eyes would follow this direction, while the eyebrows and eyelids would change to the "alert state" by opening and rising when motion was detected. This was great, but they needed to test it on the actual parts, because their code needed physical constants based on the material we used. I did some googling to find out the size of the average person's eyeballs and eyelids. We ordered 1-inch polypropylene balls for the eyeballs and standard ping-pong balls for the eyelids. They arrived a few days after Marathon Monday and were added to the Lego mechanism on that Friday. Christine and Marie did this by drilling a hole in two of the plastic balls that fit a rod. They also cut one ping-pong ball in two to get the two eyelids. At this point, we had a preliminary structure to put the mechanism behind. Here is what the face looked like:
The face and eyes, being held by Christine in this picture
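As I said, I only know the basic principle behind their direction detection, so this is my own rough sketch of the idea in Python. The sensor names and the distance cutoff are placeholders, not values from their actual program:

```python
COVERED = 50  # cm; a reading below this means the sensor is covered (placeholder)

def gaze_direction(left_cm, center_cm, right_cm):
    """Map three ultrasound readings to a gaze direction.

    Which one or two sensors are covered tells us where the person is."""
    covered = [d < COVERED for d in (left_cm, center_cm, right_cm)]
    if covered in ([True, False, False], [True, True, False]):
        return "left"
    if covered in ([False, False, True], [False, True, True]):
        return "right"
    if any(covered):
        return "center"  # alert state: lids open, brows raised
    return "idle"        # nobody detected
```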
         To build the face and the back of the head, I first measured our heads. I looked up the relative positions of facial features of an average person. Next, I drew an ellipse on SolidWorks that was as big as the average of our faces. I put two holes in it for the eyebrows and two holes for the eyes. I cut away the part below the imaginary nose for the jaw. We thought we would wire-fit the face with the box structure around it, so no other holes were made. I also built a box structure, which we can call the skull, to rest our mechanism in. It was 15 cm high, 13.5 cm wide and 12 cm deep. We decided on these dimensions after looking at the Lego structures and how much space they took. This box structure was to be connected to the face at 4 cm. We printed these parts on a 3/16-inch-thick Delrin sheet. With EJ's help, I drilled holes in these parts so that we could put the wire in. However, when the time came to cut the piano wire, Lyn and I had serious difficulties. There was no way to cut so many pieces of wire and be able to locate all the pieces flying across the room in a reasonable amount of time. I hadn't wanted to use press fitting because I had found it hard to calculate and not very reliable during our bird project. However, I had no option, so I got to work. Briana had been doing a lot of press fitting for the Creep, and she kindly helped me understand the principle behind a tight fit. I used her notes to calculate the holes and draw a new head structure for Wendy. I first printed two rectangles, one with a hole and one with a piece sticking out, to try my tight fit. It was very tight and I was happy with it. I printed the rest of the parts using the following files:



Two of these parts made the sides of the skull

This is the back of the skull.


The face
        The fit was so tight that I needed to use a hammer to put all the pieces together. This was great because it also meant that it'd be hard for them to come off. Marie and Christine taped their Lego motors and put the mechanism inside. They started debugging their code. I started to get the jaw ready to put into the skull. Due to an error in my calculations, it was about 5 mm too wide to fit into the skull. While they were trying to make the eyes follow reliably, I was filing away at my precious jaw...

Final Project, III: Figuring The Jaw Out

       The first thing I did next class was build a new jaw out of Lego. This was a little too puzzling for me for some reason, so I decided to examine A.L.F.I. for some clues. I also read the blog on its development. Then, I consulted Lyn and finally built a hinged jaw. The result was quite simple, and I was a little embarrassed because it had taken me so long to wrap my head around it. I quickly moved on to the programming part.
       The biggest challenge also became apparent after a chat with Lyn. I had assumed that we could record ourselves talking, program the NXT using these files, and the sound would come from the brick. Well, it wasn't that easy. We didn't even know if the NXT could play such sophisticated sound. We had heard it only beep and play the short start-up melody so far. Lyn suggested that I program the NXT to send a message via Bluetooth to a device that could play music. He said he could supply me with an Android phone or something of the sort for playing the sound files. Therefore, I started doing my research on how to get an NXT to play a sound file and, alternatively, communicate via Bluetooth. By the end of the class, all I had visibly accomplished was making the NXT play a barely audible 'Is this working? What the hell!?' in my voice when its button was pushed. That was far, far away from my goal, but I learned a lot of things during this class. I learned that the sound file type for LabVIEW is .rso and got the software for converting other types of sound files to it. I also got software for recording and editing sound. Additionally, I tried using text-to-speech software to create a robotic and creepier voice, but this did not work. I also started my work on deciding on phrases. The end of the class did not look hopeful, but I knew we would figure it out. I knew we had to, because Deepika and Lisa's project depended on being able to play music. Chris was out of the country, and Lyn said he would look into it. I remember leaving the class thinking this would be harder than I had thought.
        I decided to focus on building the jaw until I had more information on how to make the NXT talk. Around the same time, I sent my teammates a list of possible phrases for our puppet and asked for their opinions. Together, we decided on the following sentences:
Hello!
Hmmm... interesting!
Tell me more...
Come closer...
And how do you feel about that?
        The last one got eliminated later because it was too long and we didn't have enough memory space on the NXT brick for all these files. We replaced it with a 'Hi!' as an alternative to 'Hello!'. The jaw wasn't as easy, unfortunately. Another helpful chat with Lyn guided me in building the jaw in SolidWorks. I was really excited because I found out that we could use the 3-D printer for our jaw. After a few hours, I had a part I could print. Or so I thought. Now I had to figure out how to put an axle, specifically a Lego rod, through it. I reasoned that an extruded cut in the jaw in the shape of the rod's cross section would give the expected result. I talked to Briana at Lyn's suggestion, because her group had made circular holes for these rods. She gave me the diameter of the hole, and I used that to build a cross on a rectangle. Lyn had advised me to test the hole before printing the jaw, since printing is a long and expensive process. So I printed a small cube with a hole in it using the 3-D printer. This took us about half an hour, but unfortunately it didn't fit. So I tried again, making the hole larger by 0.1 mm on each side. This did not work either. My next trial involved three pieces, each larger by 0.05 mm than the previous one. Trial 3 also failed. I got better results on the fourth trial with three pieces. The smaller two pieces could be pushed onto the axle with lots of force, and the largest one fit very tightly and well with relative ease. This largest hole, with 1.85 mm sides, made me very happy:

Success!
      After a little resistance from SolidWorks, I was ready to print the jaw. We had about a week left and needed to speed things up. When we oriented the jaw for the most efficient printing, however, the orientation of the hole changed. We didn't know if the dimensions would still be right this way, so we printed the right-sized cube in this orientation to find out. The result was disastrous, mostly because of the debris that had piled up around the tip and in the bottom of the printer. Lyn cleaned it up, and we decided to print in the less efficient orientation that was sure to give us a correctly sized hole. This took a whole night. I learned how to use the base bath and put the jaw in there so the support material would dissolve. It looked like this before the bath:
Fresh out of the printer!
         While it was getting ready in the bath, I had other things to worry about, like giving it a head...

Final Project, II: Decisions

       The next week, we had a number of decisions to make. We needed to figure out how we were going to achieve the motions we wanted, what material we were going to use, and who was going to do what. A fair amount of emailing and some in-class discussion led to results. We decided that we would use ultrasound sensors for detecting people and triggering eyeball, eyebrow, eyelid and head rotation. We had initially considered cameras but found out that ultrasound sensors are much easier to work with. We also decided that we would build a face out of Delrin sheet and cover it with a mask of some sort. Ping-pong balls were the material of choice for both eyeballs and eyelids. The inner structure was to be built out of Lego. Then, we discussed who would do what. Since we had three people who each showed clear interest in a specific part of the puppet, we thought dividing the project by parts would be the most efficient. I was really excited about the creepy speech, so I took on the jaw and speech programming. Marie was into making crazy expressions and winks, so she took the eyelids and eyebrows. We had just watched the famous scene from The Exorcist, and Christine took the eyeballs and the head. We wrote out a plan of action for the weeks leading up to the final exhibit.
       The next step was building Lego models of our puppet. We each built an individual structure, which at the end we were able to put together for our presentation. I built a jaw that opened with an up-and-down mechanism using a PicoCricket. I was trying to minimize the number of motors and NXT bricks we used, because they are large and take up too much space to fit in a head. I also explored other options that were available to us, like servo motors. However, my model showed me that we needed a hinge mechanism, rather than up-and-down motion, for a more realistic puppet. This meant that I needed to be able to control the angle the jaw opened to, in order to produce speech-like angle variation. This eliminated the other options and made it clear that I needed to use an NXT motor. Christine built eyeballs that slid in their sockets, and Marie had eyelids and eyebrows attached to them. Our model looked quite impressive at the time we presented it to the class. It also provided us with valuable information on how we should proceed. After this point, I focused mostly on my individual duties while Marie and Christine worked more together. Their parts moved in concert and depended on the same stimuli, so they wrote their code together. I only know the basic principle behind their program and have seen it only a few times. From now on, I will talk mostly about my own contribution to the project.
      

Final Project, Part I: Ideas and Presentation

      It was finally time for our final projects. We had already been told that the theme would be "puppets". Christine and I were already partners, and we stayed that way. Marie also joined us as we came together to become the only, and most awesome, trio of the class. We had a number of ideas and had a lot of fun discussing different kinds of puppets. We went online and looked at many examples on the first day. We saw a lot of animal puppets, some marionettes and a few sock puppets. I liked the idea of a dancing puppet and suggested we build one. I thought we could incorporate many popular songs and dance routines into our project, such as the Macarena, The Ketchup Song and Tunak Tunak Tun. We went through many versions of this dancing puppet before we decided that an elephant would be the greatest. We wanted this elephant to be able to turn on its bottom, have huge ears that moved when there was a sound, and a waving trunk. We also wanted it to be able to do the dances associated with the previously mentioned songs. This was the first of our ideas that we really developed.
                Another idea we had was a “Wendy Wellesley” puppet. We wanted a puppet head that looked like a regular girl from Wellesley College. She would be a good friend who listened to you and said reassuring things. We also wanted to make sure she looked like she was paying attention. Therefore, we would program her eyeballs to follow people around her. Her eyelids would close and maybe even wink. Her eyebrows would also move to produce expressions. She might even yawn.
                A couple of cool ideas came from Christine. She suggested sock puppets. We’d have a boy and a girl sock puppet that would pretend to talk to each other when they detected conversation. They would recognize pitch and the boy would assume the identity of the person with lower pitch while the girl would talk when the higher pitched person talked. We also talked about having a move that they would do when someone talked for too long. Another idea she had was a shadow puppet version of Wendy Wellesley in case the design was too hard to carry out.
                We presented these ideas to the class and got feedback. It was clear that the theme of the year was creepy robots. Wendy's following eyes fit quite well into this theme, and she received positive responses. The elephant was also popular. As a result, we chose Wendy and the elephant as the projects we wanted to focus on. We realized that another group was also considering building a dancing robot, so we decided to develop Wendy further. We added some other features, such as full head rotation, being mounted on the wall, and creepy lines, to strengthen the creepiness of our puppet. We then presented our more-developed ideas to Chris at Tufts. This visit confirmed Wendy as our final project.

A Bit Of Programming, II

       During the next class, we continued with the theme of programming. This time we used MATLAB for the first time. We were given a link to an online book on MATLAB and several exercises to program. Although in retrospect the first four were relatively easy, it took us a long time to figure out what to do. I knew some physics and Christine knew some Java, and with loads of help from Lyn and Chris, here are the results:

Prompt:
(The expression for calculating Fibonacci numbers is given.) Translate this expression into MATLAB and store your code in a file named fibonacci1. At the prompt, set the value of n to 10 and then run your script. The last line of your script should assign the value of Fn to ans. (The correct value of F10 is 55).

Program:
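Our actual script was posted as an image and was in MATLAB; as a sketch, here is the same idea in Python, assuming the book's expression was the closed-form (Binet) formula for Fibonacci numbers:

```python
from math import sqrt

def fibonacci(n):
    """Closed-form (Binet) expression for the nth Fibonacci number,
    rounded because the floating-point result is only approximate."""
    s5 = sqrt(5)
    phi = (1 + s5) / 2
    return round((phi**n - (1 - phi)**n) / s5)

ans = fibonacci(10)  # the exercise expects F10 = 55
```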
Prompt:
Imagine that you are the owner of a car rental company with two locations, Albany and Boston. Some of your customers do “one-way rentals,” picking up a car in Albany and returning it in Boston, or the other way around. Over time, you have observed that each week 5% of the cars in Albany are dropped off in Boston, and 3% of the cars in Boston get dropped off in Albany. At the beginning of the year, there are 150 cars at each location. Write a script called car_update that updates the number of cars in each location from one week to the next. The precondition is that the variables a and b contain the number of cars in each location at the beginning of the week. The postcondition is that a and b have been modified to reflect the number of cars that moved. To test your program, initialize a and b at the prompt and then execute the script. The script should display the updated values of a and b, but not any intermediate variables.

Program:
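The script itself was posted as an image; here is a Python sketch of the same update (our real car_update was a MATLAB script acting on workspace variables, so a function is only an approximation of its shape):

```python
def car_update(a, b):
    """One week of rentals: 5% of Albany's cars move to Boston,
    3% of Boston's cars move to Albany."""
    atob = 0.05 * a   # cars leaving Albany for Boston
    btoa = 0.03 * b   # cars leaving Boston for Albany
    a = a - atob + btoa
    b = b + atob - btoa
    return a, b

a, b = car_update(150, 150)  # 150 cars at each location to start
```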
Prompt:
Create a script named car_loop that uses a for loop to run car_update 52 times. Remember that before you run car_update, you have to assign values to a and b. For this exercise, start with the values a = 150 and b = 150.

Program:
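Again as a Python sketch of the image we posted, running the weekly update for all 52 weeks of the year:

```python
def car_update(a, b):
    """One week of rentals, as in the previous exercise."""
    atob = 0.05 * a
    btoa = 0.03 * b
    return a - atob + btoa, b + atob - btoa

a, b = 150.0, 150.0   # starting fleet at each location
for week in range(52):
    a, b = car_update(a, b)
```

By the end of the year the fleet drifts toward the steady state where the weekly flows balance (5% of a equals 3% of b), while the total number of cars stays at 300.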
Prompt:
Experiment with the following simulation of the trajectory of a hit baseball. For what angle is the horizontal  distance maximized?
                    % Model of trajectory of hit baseball. Assumes no air drag. 
% What angle maximizes the distance travelled? 

clf % clear the previous figure
hold on % allow plotting of multiple figures
angle = 30; % angle baseball hit, in degrees
velocity = 50; % initial velocity of baseball, in meters/sec
rads = angle * pi / 180;
velocity_x(1) = velocity * cos(rads); % horizontal velocity of baseball
velocity_y(1) = velocity * sin(rads); % vertical velocity of baseball
x(1) = 0; % x position of batter
y(1) = 1; % assume baseball hit 1 meter off ground
dt = 0.1; % time step for simulation, in seconds
g = 9.8; % gravitational constant, in meters/sec^2
i = 1; % iteration index
while y(i) > 0 
    x(i+1) = x(i) + velocity_x(i)*dt; % Update x position 
    y(i+1) = y(i) + velocity_y(i)*dt; % Update y position
    velocity_y(i+1) = velocity_y(i) - g*dt; % y velocity changes due to gravity
    velocity_x(i+1) = velocity_x(i); % x velocity doesn't change (assume no air drag)
    plot(x(i), y(i), 'r.-');  % display a red dot for each point, and connect them with lines
    i = i + 1; % change index for next iteration
end
x(i) % Display the final x value.

Program:
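Our answer script was an image on the blog; as a sketch, here is the same simulation ported to Python and swept over angles (same Euler time-stepping, same no-drag assumption; the sweep range is my own choice):

```python
from math import cos, sin, pi

def distance(angle_deg, velocity=50.0, dt=0.1, g=9.8):
    """Horizontal distance when the ball comes back to the ground,
    using the same simple Euler update as the MATLAB model."""
    rads = angle_deg * pi / 180
    vx, vy = velocity * cos(rads), velocity * sin(rads)
    x, y = 0.0, 1.0        # ball is hit 1 meter off the ground
    while y > 0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt       # gravity only; vx is constant (no drag)
    return x

# Sweep angles to see which one carries the ball farthest
best = max(range(10, 81), key=distance)
```

With no air drag, the distances peak around the middle of the sweep, which is what the exercise is after.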


Prompt:
Below is a simple model of a DC motor.  Implement this model in MATLAB by making a simulation motor.m similar to the baseball trajectory simulation above.  Note that the Greek omega is the angular velocity (in radians/second) and the Greek tau is torque (in Newton-meters).  In one time step dt, the rotation changes by omega*dt radians, where there are 2*pi radians in 360 degrees. Also in one time step, omega changes by (tau/m)*dt, where m is the moment of inertia of the motor.

Assume that you run the model until the rotation is 90 degrees.

You may assume the following values:

V_terminal: 5 Volts
K_motor: 0.3 Newton-meter/amp
R: 25 ohms
m (moment of inertia for motor): 0.0001 Kilogram-meter^2
dt = 0.001 (simulation time step)





and then,
In a file motor_control_prop.m, develop a proportional control motor controller for the motor model from #6. Introduce a variable K_prop that controls the proportional gain. Design your program so that it runs for several simulated seconds and displays the last time t_last_bad the angle was more than 0.1 degrees away from 90 degrees. Fiddle with K_prop to minimize t_last_bad.  What are your values of K_prop and t_last_bad for the least t_last_bad you observe?
Program (proportional control):
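Both scripts were posted as images; here is a rough Python sketch of the motor model with the proportional controller wrapped around it (the plain #6 model is the same loop with the voltage held at 5 volts). The run length, the voltage clipping, and the back-EMF form of the model are my own assumptions about how to fill in the exercise:

```python
from math import pi

# Motor constants from the exercise
K_MOTOR = 0.3     # Newton-meters per amp (also volts per rad/s of back-EMF)
R = 25.0          # ohms
M = 0.0001        # moment of inertia, kg-m^2
DT = 0.001        # simulation time step, seconds
TARGET = pi / 2   # 90 degrees, in radians

def run(k_prop, t_total=3.0):
    """Proportional control: drive voltage proportional to the angle error,
    clipped to the 5-volt terminal supply. Returns t_last_bad, the last
    time the angle was more than 0.1 degrees away from 90 degrees."""
    theta, omega, t = 0.0, 0.0, 0.0
    t_last_bad = 0.0
    tol = 0.1 * pi / 180
    while t < t_total:
        error = TARGET - theta
        v = max(-5.0, min(5.0, k_prop * error))  # clip to terminal voltage
        current = (v - K_MOTOR * omega) / R      # back-EMF reduces current
        tau = K_MOTOR * current                  # torque from current
        omega += (tau / M) * DT                  # omega changes by (tau/m)*dt
        theta += omega * DT                      # rotation changes by omega*dt
        t += DT
        if abs(error) > tol:
            t_last_bad = t
    return t_last_bad
```

Fiddling with k_prop here behaves the way the exercise describes: too small a gain never settles within the run, while a well-chosen gain drives t_last_bad down to a fraction of a second.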

A Bit Of Programming, I

      The next few days were spent exploring ways of programming the NXTs and MATLAB. There were a number of projects we did during this time. The first was getting our motor to move to exactly 90 degrees. I think we might have done this before we made the ramp climber. In any case, here is what we did: we wrote a program in LabVIEW that said go forward, wait for 90 degrees and brake. We also had a program that allowed us to read the rotation from the NXT's screen. We observed that, due to the time it takes for the NXT to realize it has gone 90 degrees and put the stop command into action, it always moved more than 90 degrees. Therefore, we changed our program so it would move back when it overshot.

Our code for moving exactly 90 degrees
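The LabVIEW code itself was an image; the logic, sketched in Python with a made-up motor stand-in (the 2% overshoot is invented to mimic the reaction-time overshoot we saw, not a measured value):

```python
class Motor:
    """Made-up stand-in for the NXT motor, not a real API; it pretends
    to overshoot slightly, the way ours did from reaction time."""
    def __init__(self):
        self.rotation = 0.0  # degrees, as read from the rotation sensor

    def turn(self, degrees):
        self.rotation += degrees * 1.02  # always goes a bit too far

def move_to_90(motor):
    motor.turn(90)                         # go forward, wait for 90, brake
    while abs(motor.rotation - 90) > 0.5:  # overshot? move back the difference
        motor.turn(90 - motor.rotation)
    return motor.rotation
```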

      Next, we were to build a program that modelled the NXT motor controller. We used the NXT's data logging feature to collect data. Then, we put this data into Excel and got a line that predicted the motor's motion.

The data from the NXT


Here is our program for the NXT motor controller:

Here is the final graph we arrived at:

Building With Legos, IV: To The Right, To The Right

       Our next project did not involve any sensors: we were to modify our robot and our code to make a vehicle that could climb the wooden ramp, which had magically appeared in the front of the class that day. To be quite honest, I do not remember much about this day. I was quite sick and pretty much useless, especially with the programming. I wasn't able to think very clearly, so I tried to help when we had mechanical problems. Briana wrote the following program. It is my understanding that the wheels were to move at the same time and same speed so that they would stay on the ramp and not stray to one side or the other.
The climber's code

       Unfortunately, our robot had a few problems. At first it didn't have enough traction to go up such a slope. I believe we solved this problem by adding weight bricks on top of our robot. When it did climb, it seemed to like to go to the right. We checked the bushings and the rods but couldn't find the source of the problem. We also changed the tires, but it made no difference. Chris and Lyn went over the code, but they couldn't find anything wrong with it. At the end of the class, we decided that there must be something different about one of the motors that caused non-uniform motion, and left it at that.

Building With Legos, III: Line Follower Part 2

The next two classes were spent modifying our line follower to get smoother motion. We learned two new methods of control: proportional and derivative.
                Proportional control required our vehicle to change its speed according to how far off it was from the line. This was done by experimentally determining an optimal light value for the line we wanted the vehicle to follow. Then the difference between this optimal value and the current light reading of the sensor was taken to get our “error”. This error was multiplied by a “gain”, a factor we used to adjust how much we wanted our speed to change. Then we added or subtracted this value to or from the base power of our wheels, depending on which wheel it was. This is how we proportionally controlled the speed. The end result produced differences in the speeds of the two wheels, which controlled the size of the sweeps. The goal was to get a critically damped system, where the vehicle corrects itself as quickly as possible and settles on the goal value without oscillating around it.
                Next, we wrote the code using mathematical signs in LabVIEW and started adjusting it. We first had a power of 25 percent, an edge light value of 34 and a gain of 1. We found that 25 was too fast for turning angles and 20 worked better. We also decided on a gain of 3 after trying 1 through 4. This gave us smaller sweeps instead of the jerky motion we had observed. We also found that the light values were very sensitive, and 35 worked better as the edge value. Our robot worked pretty well with this new program. However, we didn't feel that the proportionality made such a great difference for our little line follower. Lyn suggested we try the white table with the black tape. We had been using yellow masking tape on the grey floor. This didn't provide as great a difference between tape and ground as the table and electrical tape did. We had already observed that even a difference of one means a lot to the robot in terms of following the line, and a greater difference would make it much easier for the vehicle to function smoothly. Thus, we agreed to move to the table and modify our program to work there. Our new gain was 0.8, our power 17 and our edge light value 39. This produced a great line follower that stayed on the line (and the table) for as long as we watched. We were really proud of our little guy. Sure, it had some problems. For example, it only functioned on one side of the line due to our wheel programming choices. Its sweeps also allowed it to turn only so many degrees. However, it worked well within its limits and we were happy with it.
Our code for the proportionally controlled line follower
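The core of that LabVIEW code, sketched in Python (the numbers are our final table setup; the sensor call itself is left out, since the real reading came from the NXT):

```python
EDGE = 39    # light value at the edge of the black tape on the white table
GAIN = 0.8   # proportional gain
POWER = 17   # base power, in percent

def wheel_powers(light_reading):
    """Proportional control: scale the error by the gain, then speed one
    wheel up and slow the other down by that amount."""
    error = light_reading - EDGE   # how far off the edge we are
    correction = GAIN * error
    left = POWER + correction
    right = POWER - correction
    return left, right
```

Right on the edge the wheels run at equal power and the robot goes straight; the farther the reading drifts from the edge value, the harder it steers back.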
                Up next was another sophistication: derivative control. Using derivative control, we could get our vehicle to move faster without getting large errors. We did this by adding a shift register to our program. We then took the difference between our previous and current errors using the shift register and adjusted our vehicle's power based on this difference. This constituted a closed-loop system, because we were using feedback from the system to adjust it. Before we applied these principles to our line followers, we built a small light arm that stayed at the same angle, as an exercise. We attached a light sensor to a rod and the rod to a motor. We programmed it so that no matter which direction you pushed it in, it would come back to the level where the light value was 40. Here is the program:
Light arm's code
Light follower from the back
                Then we started to use these new ideas on our line follower. Now it was to follow a light at the back of another vehicle. We modified our car so that it had a light sensor at the front and a small light at the back. The location of both of these was up to us, which made it a variable among groups. So we paired up with another group to get a reading of the light value at what we thought was a good distance from their back. This turned out to be 30. We added derivative control to our light arm program (we hadn't had time to do so before; it had been just a closed loop) and changed the light value to 30. When we tried out the program outside on the carpet, our vehicle had trouble following. We came back in and tinkered with it a little. Turning off the sensor's light seemed to solve the problem. Our robot ended up working very well, even functioning as part of a three-vehicle train.



Light-follower's code
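The derivative idea, sketched in Python. The target of 30 is our real following-distance reading; the gains and base power here are placeholders, since ours were tuned by trial and error in LabVIEW:

```python
TARGET = 30  # light reading at a good following distance
K_P = 1.0    # placeholder proportional gain
K_D = 2.0    # placeholder derivative gain

def pd_power(reading, prev_error, base_power=20):
    """One loop of proportional + derivative control.

    The (error - prev_error) difference is exactly what the shift
    register carried between loop iterations in LabVIEW."""
    error = reading - TARGET
    derivative = error - prev_error
    power = base_power + K_P * error + K_D * derivative
    return power, error  # pass error forward, like the shift register

power, err = pd_power(35, 0)    # first loop: error 5, derivative 5
power, err = pd_power(33, err)  # error shrinking: negative derivative damps the push
```

The derivative term is what lets the vehicle move faster without large errors: when the error is already shrinking, the correction is eased off before it overshoots.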
 

Building With Legos, II: Line Follower Part 1


                Our next assignment was to build a Lego vehicle that followed a tape line on the floor. We re-partnered, and Briana and I got to work. The first part of this assignment was to build the vehicle. This time our robot could have wheels, which made things much easier mechanically. However, when we added the light sensor, we quickly discovered that we needed to place it high so it wouldn't get in the way of other things, like the motors and their wires. This made our robot a bit unbalanced, and it was also somewhat hard to maneuver. We had a 2-motor limit, so we thought adding a third wheel in the front was out of the question. However, a little exploration led us to a small ready-made front wheel mechanism that could be attached directly to the NXT. This addition made it easier for our vehicle to turn, but it also presented some difficulties of its own. It wasn't getting enough traction, so it was ineffective at first. Also, our first programming efforts revealed that our light sensor was too far away from the ground to get an accurate reading of light values. We changed our design to fix these problems and, with Lyn's help, got a well-working robot. We learned that before you can program a robot, you need to make sure that it is physically capable of the motions you ask of it. Mechanics first, intelligence second.

First Robot
Line Follower.2

 

Our First Code

          Then we started to think about ways to make our robot find the line and follow it. We first wrote a program that showed the light levels on the screen of our NXT when we put it on the ground. This told us that the tape read 37-40. The floor was darker in color, and its readings were 31-34. We decided to make one of the wheels always go forward and the other go forward or backward depending on the light value. The idea was that the robot would turn in a circle until it found the line reading of 38, then keep going until the reading changed; then the circling would begin again. This seemed like a reasonable way of moving, as the line would never be too far away. However, when we actually put our vehicle on the not-so-straight line we had prepared, we saw that we had a few problems. Firstly, the robot needed to be placed very close to the line or it would never find it: the light sensor had to sweep across the line as it rotated. This made line following a very small part of the vehicle's total movement, as it kept turning and turning. We also realized that even when it did scan the line, it would sometimes keep going a moment too long and miss it, due to the time it takes for the program to execute. This was also visible when the line curved to one side or the other: one side was easy to detect, while the other took a complete rotation to find. All of these difficulties called for adjustment.
Our Code for Reading Light Levels
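Our first strategy can be sketched in Python (the real program was LabView blocks): one wheel always runs forward while the other reverses until the sensor reads the tape value, so the robot spins in place to search. The readings (tape 37-40, floor 31-34, trigger at 38) are from our measurements; which wheel did what is illustrative.

```python
LINE_VALUE = 38  # reading that meant "sensor is on the tape"

def wheel_commands(reading):
    """Pick each wheel's direction from a single light reading."""
    if reading >= LINE_VALUE:
        # On the tape: drive both wheels forward along the line.
        return ("forward", "forward")
    # Off the tape: counter-rotate so the robot circles in place
    # until the sensor sweeps across the line again.
    return ("forward", "backward")
```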

                We were puzzled about what to do next, because our program had worked, just not in the way we wanted. There was no bug in the program or error in the mechanical design; our idea simply wasn't efficient. We needed to come up with another way of finding the line and following it. Talking to Lyn convinced us that a sweeping motion would be the best way to move. Almost everyone in our class was already using this technique, and it was working for them. We agreed to try this method and met outside class, because we had to have our line follower functional by the next class. We wrote new code using the light value at the edge of the tape, 34. We made our vehicle move with the sensor on the edge of the tape. It would constantly go a little right, then a little left, then right again, by alternating forward and brake on the two wheels. The motion depended on the light values: the vehicle would change direction when it realized it was off the tape. Our robot wasn't bound to the line anymore; when placed somewhere distant, it would keep sweeping and going forward until it found a line. However, we observed that the line following didn't work if the robot approached the tape from the right side, due to the starting moving/non-moving assignment of the motors. We had to keep this in mind when we put our robot into starting position. One other change we made was to lower the power, so that our robot had time to respond to the light values from the sensor. This produced a much better line follower.


Our code after the adjustments
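The edge-following sweep can be sketched in Python. The edge reading of 34 is from our measurements; the forward/brake alternation mirrors the LabView motor blocks we used, though which wheel brakes on which side of the edge is illustrative.

```python
EDGE_VALUE = 34  # light reading right at the edge of the tape

def sweep_step(reading):
    """One sweep step: brake one wheel to arc back toward the edge."""
    if reading > EDGE_VALUE:
        # Sensor has drifted onto the tape: arc off to one side.
        return ("forward", "brake")
    # Sensor has drifted onto the carpet: arc back the other way.
    return ("brake", "forward")
```

This also shows why approaching from the wrong side failed: the fixed forward/brake assignment assumes the tape is on one particular side of the sensor.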


Tuesday

Building With Legos, I: Little Monster

Lil Monster Chillin on The Desk
   
       After our bird presentations, we re-partnered to start working on a little Lego monster, not the Lady Gaga kind, but one that can walk without wheels. I found this class very challenging, as I had no experience with Lego or programming whatsoever. The class had two main goals: build a robot using Legos, an NXT brick, and motors, and control it using LabView.
       At first, we had a look at what kinds of parts were available. After many trials with different parts failed, we decided on using L-shaped (or more like J-shaped) grey parts to create a structure that traced a circular path. These rods stuck out at different angles so that when our robot moved far enough to come off one, it was supported by the next. This made it look like our robot had claws. Unfortunately, it was quite hard to keep these parts at the desired angle when the weight of the robot pressed on them during movement. To deal with this problem, we sought help from Lyn and learned how to use bushings to keep the rods stable. We then used LabView to program our little monster. We had a touch sensor, and we used its input to make the robot go forward or brake. It was very exciting to see our monster move!
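That first program amounts to a one-line decision, sketched here in Python rather than LabView; whether "pressed" meant go or brake is assumed one way for illustration.

```python
def motor_command(touch_pressed):
    """Go forward while the touch sensor is pressed, otherwise brake."""
    # Both motors get the same command, so the monster walks straight
    # (in theory) or holds still.
    return "forward" if touch_pressed else "brake"
```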
        This was only the initial step toward the many challenges we would face with Lego structures. Next, Chris announced that we were going to have a race. The goal was to get from point A to point B. That simple. But not really. We programmed the monster so it would go forward forever, using an infinite while loop. There was no guarantee that the robot would endure that much walking, or that it would go straight. All the robots lined up and we pushed the buttons. Our monster was doing OK, drifting a little too much to the right, before it bumped into another group's robot. Their wires got tangled up and lil monster fell apart. It was a heart-breaking scene.
         Our next challenge was to make the robot go when it saw the white tape on the carpet and stop when it saw the tape again. Or, alternatively, make it stop before it hit the Lego figure at the end of the track. There were a lot of sensors we could choose from for this purpose: sound, light, ultrasound. Since the tape and the carpet were very different in color, we thought the light sensor would be easiest to work with, so we chose that one. At first we attached the sensor to the wrong port, so LabView was trying to get input from another port and nothing worked. After this problem was fixed, our robot did work. We didn't take exact readings of the light values but programmed with "lighter" and "darker" comparisons, so our robot didn't move as we expected. With minor adjustments to its mechanical design, it moved straight up to the figurine and stopped at the tape. However, it started to move again and ended up hitting and killing the little Lego guy. It's called a monster for a reason. Then we had to take him apart, as it was the end of class...
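The go-on-tape, stop-on-tape behaviour can be sketched as a small state machine in Python. Since we only compared "lighter"/"darker" against the carpet, on_tape is a boolean here rather than an exact reading; the state names are illustrative and the real program was LabView blocks.

```python
def step(state, on_tape):
    """Advance the state each time the light sensor is polled."""
    if state == "waiting" and on_tape:
        return "crossing"   # reached the first stripe: start moving
    if state == "crossing" and not on_tape:
        return "running"    # left the first stripe: keep driving
    if state == "running" and on_tape:
        return "stopped"    # reached the second stripe: brake for good
    return state

def driving(state):
    """The motors run in every state except waiting and stopped."""
    return state in ("crossing", "running")
```

Waiting until the sensor has left the first stripe before arming the stop condition is what keeps the robot from stopping on the same stripe it started on.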