Tuesday, September 15, 2009

On Requirements

I like looking at job postings. It's kinda like shopping for a new TV. 'Oooh, this one is for designing analog circuits!' instead of 'Ooooh, this one has three HDMI ports!'

But they're still funny, funny things. There are so many problems with how people are hired nowadays. For instance, I rarely see jobs that ask for less than 5 years of experience in... something. Whatever the job is about. Want a job making circuits? Five years of experience making circuits, mandatory. Your resume is not even considered if you don't have five years of valid work experience. Go work for $random_big_corporation with a well-known name developing circuits for five years and we'll consider you.

What exactly is the value of five years of 'professional' work experience? Surely you have a number of designs under your belt, a list of components you use, probably a few contacts for samples, support, etc.? But if you've been hacking together circuits since you were five and have a web page full of stuff you've done, yet only two years of work experience, then you're not supposed to apply. What does the five years working for a company guarantee them? Certainly not quality. Anyone can remain employed with a certain title and be bad at their job. I see plenty of people like that, but when they leave they'll put it right on their resume - '5 years of bad designs, rework, lost profit and profanity. But my title was analog design engineer the whole time.' That guy wins.

And what's with the specific skills? 'Must use Eagle for schematics and layout.' Ok, I'll learn Eagle. Not my preferred solution but I'll do it if it gets me a job. Oh wait, it doesn't say that. It says 'Proficiency in Eagle schematic capture and layout required'. What? When you're looking for a mechanic do you put 'Must have proficiency with Craftsman brand crescent wrenches'? I've used plenty of schematic capture programs and none of them are black magic (except maybe PSPICE. Grrrrr....). I'm sure I can use Eagle very well if you give me a few hours to play with it. Heck, I can DOWNLOAD it and play with it - I'll get it done before I come to work for you! The tool is not supposed to define the job. If anything, arbitrary requirements only diminish the talent pool of people who can help you. (Note I said 'arbitrary' requirements. Some requirements are sadly justified.)

So your resume has to have a bullet point - 'Created 12346463113 circuit designs for a toaster using Eagle'. Perfect. That gets past the filter. Why do the tools matter so much? True, if I'm going to work for a company that's doing a large program in C#, then I had better have worked with C# before. But that's what coders do. Code monkeys. People who get specifications in and produce code for output. Then a freaking computer checks their work with a unit test and tells them if they did it wrong. These people are automatons. Their job requires almost no original thinking. Engineering is supposed to be about creativity - finding solutions to problems. I always thought they'd sit me down and say something like 'This hyperdrive keeps overloading when we turn it on. We need to reach the Tok'Ra in three days before Apophis comes in his mothership and beats the crap out of us.' To which I'd say 'I'll put my engineering mind to work on that problem right away. And I need to work with Major Carter. Alone. Naked.' What's NOT supposed to happen is for them to then say 'Oh by the way, it all has to be done in Lisp. You have 5 years of Lisp experience, right?' Whatever happened to the right tool for the job? You know - how it's supposed to make things easier/cheaper/faster?

And don't you love how every job has its little 'niche'? This position is for an automation engineer, not an electrical engineer. This one is for an embedded application developer, not a software engineer with a specialty in embedded systems. The proposed scope is so narrow it hardly seems worth filling. Even if you have superior experience in an adjacent area, it doesn't matter.

'This position requires experience with PLCs'.
'Oh, those are microcontrollers with a whole bunch of electronics. I could design one of those. Can I have the job?'
'No, you just said you don't have any experience with PLCs.'

Or better yet, your similar experience is worthless:

'I studied to be a controls engineer but I think I fit for this DSP position. DSP and controls stem from the same theoretical background and I've also had extensive training in numerical methods for computer systems. Also, I created an embedded sensor platform that used DSP, so I have practical experience in embedded programming and DSP. Can I have the job?'

'I'm sorry, we're not hiring controls engineers.'

If I had to boil it down to one question it would be:

Why are businesses afraid to hire anyone but the person who exactly fits?

There are so many modifiers on these job listings that you'd think they had millions of people to choose from and they could afford to be picky. Ideally on the internet they do, but then they do stupid things like say 'Local candidates only' or refuse to pay for people to come out and be interviewed. Or not pay for relocation. Or not help a spouse get a job. So you're limited to whatever people can be found at hand. And they're not going to find their perfect match. I have friends who look for mates like this. Yeah, they're single. And they're missing out on a great part of life.

While it's true that companies cannot afford turnover, they also cannot afford to delay hiring for too long. If a job sits open, the work doesn't get done. Or you overwork the people you do have, which leads to turnover. All out of an... ideal? Is it a value nowadays to not train people? To not give people a chance? To not invest in them? To be picky? To not try new approaches, or hire people with new skillsets?

While I can't fathom the reasons that businesses follow these hiring practices, I can tell you that it is costing them people. Good people. Specifically, young people. Comparatively inexperienced, but good, young people. These hiring practices are biased against those who haven't had the chance to develop whatever random skill is in vogue nowadays.

But that's ok. Soon the baby boomers will retire. And then they'll have to be less picky.

Tuesday, September 8, 2009

On Being 'Well Rounded'

Following up my previous post on how well-rounded I believe I am, I'm realizing that being well-rounded is not necessarily good. At least, not in my position. Let's consider that position and see why being well-rounded but not deep in any particular skill can hurt.

I work for a large company and it employs many people. MANY people. We have all kinds of engineers, technicians, secretaries, project managers, etc. If it can happen on a project chances are we've seen it and hired for it. Need someone to create a schedule and manage a budget? We've got project managers and they've got secretaries. Need someone to design an antenna feed? We have seasoned RF engineers. Need someone to program an 8-bit microcontroller, create interfacing electronics and build an enclosure for everything? Well, no. We don't need one person to do that when we could have three separate people do it. People who excel in their individual areas and can get the job done quicker and better than one person.

Let's be serious - you wouldn't combine all of those tasks into one person unless it was absolutely necessary or desirable. When might that confluence of events happen? Let's tackle necessary first. It might be necessary to do it with one person because that's all you have. You either can't find people who are skilled in those areas or you can't use them if you did find them. The skillset might not exist in your company if it's small, and might not exist in your area if you're in a small town or other backwater type place (like most of where I grew up...). So you use what's available to you and hope that it can get you by. And it might be desirable. You could contract out the work, but then you have communication issues and overhead costs and such. If the work isn't particularly difficult then it might not be worth it to bring new people on even if they could get it done in half the time. So a wide array of basic knowledge could be useful if you can't hire more people and don't care too much about time lost and/or wasted.

But at a large company those don't generally hold true. You usually have a ready supply of many differently-skilled people. You probably have at least two people to choose from who have your skill, so you'll always choose the person who is better at what you want to do. Then this person gets more experience and hones his/her skill and eventually becomes an expert. Then, unless this person is unavailable, you will always go to this person if you need his/her skill. In the situation where two people know the skill but have unequal levels of experience, the more experienced person always wins.

And time is usually very important at a large company. There's never enough of it. Competition is fierce and to maintain market position you have to be timely. Contrast with a smaller business: there is little/no market position to maintain so releases of products can be more easily delayed without losing too much money over it. Not that deadlines aren't important to a smaller company, but if they let a deadline slip they are likely to lose less money than a large company, even as a percentage of income.

Both of these factors drive people toward specialization in large companies. With so many people available, you have to set yourself apart to make an impact. In some circumstances intimate knowledge of a project may make it more worthwhile for you to perform multiple tasks with different skillsets yourself. Or they could just put the person with the right skillset underneath you and have you lead him/her.

That's the major difference: A large company is more able and more likely to trade labor for time, where a smaller company doesn't have this luxury. If you aim to be a jack of all trades, shoot for a small company that can't support many people.

Update: I've done a little thinking and I believe there are a couple more things to be said for this line of thinking.

1) The 'time first' mentality tends to push individual people towards specialization in skills, and it also pushes entire organizations towards specialization. If you know Labview and you can do the job in Labview, chances are you'll do it in Labview. So a C# programmer is going to have a hard time. Over time the company becomes a monoculture and tries to apply its one major skillset to all problems (for good or ill).

2) The 'time first' thinking need not be limited to skillsets. If you can buy an off-the-shelf product that does what you need, then it makes no sense to build it in-house. You're essentially buying the skillset you need ready-made. This too can lead to a monoculture of suppliers - with certain businesses being preferred over others because of past experience. In and of itself, buying this experience rather than 'growing' it in-house is an acceptable tradeoff for certain decisions, but definitely not all decisions.

3) The monoculture induced by either of the above can kill a business. If this trend moves forward to its logical conclusion (which is admittedly far-fetched), then you'll have a handful of experts skilled in a small number of skills/products, or you'll just buy everything from someone else. You run into two problems with this: what happens when your extra-skilled engineers retire/get hit by a bus/find a better offer? Brain drain. What happens when all you do is buy other people's products and put them together? Brain drain again - you now have people whose sole skill is figuring out what other products to buy and re-sell. You're no longer an engineering firm - you're at best a project management firm and at worst a re-seller.

Neither of the above outcomes is guaranteed to happen. There are many other pressures that will come to bear on an organization before this brain drain becomes absolute. However, it may still become terminal before it is absolute, and the organization may not survive.

There are two ways to combat brain drain. First, never outsource (directly or by means of buying COTS products) your core competency. If you write e-commerce software then outsource your server management, not your programming. If you build Arduinos then outsource your board fabrication, not your board layout. If you outsource the core, you are paying other people to develop their skills at the expense of your own. Then, when they are the experts, they have no reason to keep you around - or at least, no reason to let you have a big slice of the profit. Second, you must, MUST invest in pursuing different/new technologies, and you MUST invest in training less-skilled people. If you have a choice between two people of different skill levels, consider choosing both. Split the work up along skill-level lines: let the less-skilled person cut his/her teeth on the easy stuff and let the more-skilled person handle the hard stuff. But keep them together - working together, seated together, with the same access to resources/people. Chances are the less-experienced person is younger than the more-experienced one and will be around longer. You'll need the same level of experience, if not more, in 20 years' time, so plan for it!

It is less efficient to make these investments in lower-skill areas and people, but unless you plan for the future you could find your company on the chopping block.

Wednesday, August 26, 2009


I consider myself a fairly well-rounded engineer. My specialty is control systems but that's in name only. It doesn't say embedded systems on my MS diploma because I had already taken all of the embedded classes for non-grad credit. It says communications on there because I sat through all of the comm classes (and got A's in them) but even though I know a lot I wouldn't consider that my bag. It doesn't say 'math minor' on my BS degree because I was two classes short (I decided to get a head start on my MS instead of the math minor). I've been programming since I was 6 in BASIC, C, Perl, Python, etc. That doesn't make me a CS major, but the years after that of learning to use source control, taking OOP classes, using unit tests and documenting with Doxygen makes me at least a friend (you guys will be my friend, right?). I built robots when I was a teenager and learned the basics of electromechanical systems as a controls engineer, so I'm claiming some mechanical experience as well. And I can speak German.

But I find as I grow older that there are some areas I just don't know about. Worse, I think I've built them up so that they seem so difficult that it's not worth it to even try to learn about them. TCP/IP, networking, ethernet - all of that is something I've been interested in but thought the barrier to entry was too high. I've fussed around the edges - I can tell you that the ethernet physical layer uses differential signaling at (I believe) 350MHz. I've done web programming, and that introduced me to ports and such, but I never took the time to go and just learn the stuff. I would start, and I'd see the 7-layer OSI model and my eyes would just glaze over. Who the hell needs 7 layers (unless it's lasagna)? What are they cramming in there that it NEEDS 7 layers? Is it really this complicated to send data across the internet? I deal with microcontrollers and RS-232. I can send data that way, but for some reason people see the need to make twittering power meters out of the same microcontrollers. Other than being just a foolish idea, it seems way too complicated if you have to interpret every layer of this 7-layer monstrosity just to do something as asinine as say 'Your refrigerator is now using 78 watts of power!'

So I figured I wouldn't deal with it. Who needs internet connectivity? I'll get along just fine. But still, it vexes me. It's something I don't know. I have a friend who does all this sort of stuff, makes routers in his free time. So I ask him 'What do you DO? What do you know that I don't know?' And he replies 'I can tell you exactly how a machine responds to a ping request. Where the message goes, where it comes from, what happens. It's complicated but I know it.' Basically, he knows how the OSI model works, what each level does, what happens. And the trick is that it's mostly programming after a point. But not straightforward programming: complicated CS-type programming. Layers of abstraction built upon layers of abstraction, so in the end you don't know that you're dealing with bits and voltages. You see, software people are TERRIFIED of bits and voltages and they will do anything they can to abstract those away. They have sacrificial programmers who write drivers to interface the bits and voltages to sane, reasonable things like floats and strings. They'd have a minor episode if they saw a character instead of a string. They NEED the 7 layers of the OSI model to be able to effectively ignore the basic truth of what's happening: bits are being represented by voltages. By the time you get to the 5th layer you're dealing with all kinds of structs, connection IDs, sessions and voltagephobia, so that you don't know what's going on. If you're me, that is.

If he had tried to explain it all, I don't think I would have understood. He could have explained it in Elvish and it would have made as much sense. So I ignore it again, BUT IT STILL BOTHERS ME. Especially since I designed what I thought was a very complicated, nice and worthwhile messaging system for microcontrollers built on top of RS-485. It had circular buffers, message queues, priority, timeliness, guaranteed message delivery and all that jazz. It was complicated, but I just started at the bottom and worked up. Pretty soon *I* had a few layers just to implement serial communication between a couple of devices. You had to have addresses and ACKs and collision sensing and much more to handle all this. It gets complicated quickly. Even when you start simple you always hit the edge case. "Oops, what about this? Better add an address field. Oops, better add parity. Gotta ACK that for sure..."
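The layer creep is easy to demonstrate. Here's a minimal sketch, in Python, of the kind of framing a messaging system like that needs; the field layout (sync byte, addresses, length, checksum) is hypothetical, not the actual design from my project.

```python
SYNC = 0xAA  # hypothetical start-of-frame marker

def checksum(data: bytes) -> int:
    """Simple additive checksum over the frame body (a stand-in for parity)."""
    return sum(data) & 0xFF

def build_frame(dest: int, src: int, payload: bytes) -> bytes:
    """Frame layout: SYNC | dest | src | length | payload | checksum."""
    body = bytes([dest, src, len(payload)]) + payload
    return bytes([SYNC]) + body + bytes([checksum(body)])

def parse_frame(frame: bytes):
    """Return (dest, src, payload), or None if the frame is corrupt."""
    if len(frame) < 5 or frame[0] != SYNC:
        return None
    body, chk = frame[1:-1], frame[-1]
    if checksum(body) != chk:
        return None  # caller would drop it and wait for a retransmission
    dest, src, length = body[0], body[1], body[2]
    if length != len(body) - 3:
        return None
    return dest, src, body[3:]
```

On receipt, a node compares `dest` to its own address and queues an ACK frame back to `src`; a sender that sees no ACK before a timeout retransmits - which is exactly where the queues, priorities and timeliness logic start piling up.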

It adds up. So I thought I might take another crack at TCP/IP and you know what? It turns out it's a bunch of things I already know. Just more queues and buffers and ACKs. When a ping happens (I think this is how it goes...) it's sent to the right computer (as determined by its IP address) where the network layer reads the message, finds it's an ICMP ping packet and puts the information in a special struct then pushes it into a queue for the ICMP handling process to take care of. When it gets around to it, it creates a response packet, tells the network layer where to send it (what IP address) and puts all that data into another struct and pushes it onto a queue for the network layer. That's it. Not too complicated - just data structures, processes and lots of teamwork. I learned something about TCP/IP.
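That flow is simple enough to mock up. Here's a toy sketch of the handoff just described - the 'network layer' parses a packet into a struct and queues it for the ICMP process, which queues a reply back. The struct fields and type strings are illustrative, not the real ICMP wire format.

```python
from collections import namedtuple
from queue import Queue

# A stand-in "struct" for a parsed packet
Packet = namedtuple("Packet", "src_ip dst_ip icmp_type payload")

icmp_in = Queue()   # network layer -> ICMP process
net_out = Queue()   # ICMP process -> network layer (for transmission)

def network_layer_receive(pkt: Packet, my_ip: str):
    """Demux: hand ICMP echo requests addressed to us to the ICMP process."""
    if pkt.dst_ip == my_ip and pkt.icmp_type == "echo-request":
        icmp_in.put(pkt)

def icmp_process():
    """Turn each queued echo request into an echo reply, addressed back to the sender."""
    while not icmp_in.empty():
        req = icmp_in.get()
        reply = Packet(src_ip=req.dst_ip, dst_ip=req.src_ip,
                       icmp_type="echo-reply", payload=req.payload)
        net_out.put(reply)
```

Feed `network_layer_receive` a request from 10.0.0.2 to 10.0.0.1, run `icmp_process`, and a reply addressed back to 10.0.0.2 lands in `net_out` - data structures, processes and teamwork, like the paragraph says.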

TCP/IP is a complicated system, there's no question. But it does a LOT. It can reliably send data halfway around the world to a computer you didn't know existed in a few seconds. That sort of success is built up on layers and layers of success - each layer being tested and battle-hardened until it has so many special cases that it looks like an insane asylum. But once again - IT WORKS. Simpler systems would not. I don't begrudge the creators of the magical internet their seven layers anymore. It's not simple, but it's not incomprehensible. It uses tools and methods that I was aware of - I just wasn't aware of how they pieced them together. Now I am. The moral of the story is that if there's something you want to learn or need to learn, don't think that it's impossible just because it's complicated, and don't condemn the creators of the system just because it's complicated. Just take it slow, re-read a lot, and eventually it will make sense. Eventually.

Thursday, August 20, 2009

Content or Not Content?

That title up there is actually a clever double-meaning. I'll get to those meanings in a second. Suffice it to say I'm smart and it makes me feel smarter to withhold information from you for a few minutes longer. It won't be terribly long, so just bear with me.

I've often heard that writers can't write if they're happy. After all, how boring is happy? Have you ever imagined yourself perfectly content with not a care in the world? How long did you keep imagining that before you moved on to something more interesting? It takes me about five seconds. I imagine myself sitting in a chair with a smile on my face and... then I repeat that. Forever. That's dull. Writers know this either consciously or subconsciously and strive to keep themselves aggravated. After all, writing is about conflict. Protagonists, antagonists, twists, man vs. man, man vs. nature, man vs. God, etc. So it doesn't do to live a life of contentment, because the more conflict you're familiar with, the more you have to write about. Those writers make good books (in case you didn't catch it, the first meaning of the title was 'content' as in 'the opposite of conflicted').

So perhaps I can take it as an indication of my level of contentedness that I don't write here much. That's actually incorrect or at least a confusing argument. For instance, what kind of content could I put here? I started this blog to detail cool ideas I had and hope that other people would say 'wow, that's neat! I didn't know you could do that!'. I have some of those. From time to time. But my day-to-day work isn't exactly to work with super-awesome ideas all the time. It's somewhat hard to breed super-awesome ideas ("Hey guys, let's make robots with neural networks that train themselves to efficiently compete for limited resources against other robots!") when the only ideas you work with are super-pedestrian ("Hey guys, let's use Labview for test software again! All right! Just like the last 50 times! High-Five!"). Now the total contents of this blog are by no means all of the super-awesome ideas I've come up with, but it's enough of them. What I DIDN'T want to do with this blog was to 1) whine about my life and how horrible it is, 2) just post links to cool things OTHER people have done and 3) update about how I haven't had time to update.

I hate whining. And my life isn't awful or even excessively boring. What's the worst thing that happened to me lately? I had a fight with my wife this morning. Sure there was yelling. Sure there was screaming. Sure I kicked a robot (poor Roomba!). Sure we were angry. So what? That's not bad, that's a fight. That's a normal fight. I'll tell you what we didn't do. We didn't threaten to leave each other. We didn't say things like 'My ex-girlfriend never did this to me! That's why I like her much more than you!' We pushed each other's buttons and eventually apologized to each other. Wow. I'll tell you, the number one thing I fear is getting in fights with my wife, but that's not the worst thing in the world. And how boring is my life? Well, today I had to go through my Labview sub-VIs and re-arrange the connections one by one. I have about 20 of them. Yeah, it's boring, but I grew up on a farm. My dad used to have me stand in front of open gates and keep the pigs in just so he wouldn't have to close the gate and reopen it when he wanted to get out with the tractor. I did that for hours. In winter. I think I'll survive my office.

I also hate when people just link other things on blogs. 'Hey look at this cool kegerator robot!' Ok. That's good. But most of the EE/hacking blogs I read linked to that site on that particular day. It was a little excessive to read about it five different times, and I think we'd live in a worse world if I made it six. Just a link, no commentary, no analysis, no further ideas. Just aggregation. Go blagosphere. If anyone is actually reading this be assured I will NOT do that to you. If you, for some reason, subscribe to this blog you will not get that sort of 'content' (second meaning of the title!). You will get something original even if it's just meaningless ranting like right now (which, honestly probably isn't that original. Read Ecclesiastes).

And posting about how you have no time to post? No. Just Say No. Let me say one thing: it is essential to actively maintain something you want to build a following around. If you want people to read it, then update it every day, even if it's not particularly useful information or only tangentially related to your subject. People will either read it for the sake of reading or stop reading it altogether. But the people who read it for the sake of reading it will continue, daily, religiously. Then you can put ads in front of their face and get a shiny nickel. But it has to be NEW, or at least something they don't ALREADY KNOW. If you don't update your blog for a while then guess what? People know that. They can see it. So if you update and say 'I haven't updated for a while' they will respond 'We know' and leave. People aren't interested in things they already know. Better to update less often and with actual information than to tell people something they already know.

So that's my rant. Can you expect high-quality updates from me from now on? Probably not. I would actually have to do interesting things, or in fact, things at all. I'd have to have something to write about and it turns out I only rarely have that. And, if I start writing about non-engineering things I risk becoming like everyone else. And I HATE everyone else. They make me angry, hence the name of the blog.

Monday, July 6, 2009

When the robot is smarter than you...

...then you had best let it drive!

Teleoperation is one of the largest growth areas for robotics. All kinds of robots are being used in situations that are dangerous to people: battlefields, burning buildings, defusing bombs, etc. I don't know about you, but when I defuse a bomb or do any other precision work I like to have all of my senses about me. Heck, I like to have all of my senses about me when I'm walking down the street. You couldn't pay me enough to remove one of my senses or any of the well-tuned reflexes that my body has developed over time.

So why is it that when we ARE paying people to build robots, we ignore most of those senses? In many of the tele-operated robots I've seen, you get one sense: a teeny camera. And you have two little sticks that move your tracks or wheels. That's it. And what happens? You drive your robot around half blind into walls. Somehow when we design robots for autonomous operation we give them all kinds of sensors and instructions for how to deal with their input. If an autonomous robot saw it was about to run into a wall it would correct itself and move on. But somehow when we put a human in the loop we forget how difficult it is to control these things, so we give it a camera and call it done. There's a human in the loop, so why bother?

Why not give a teleoperated robot the same senses and reflexes we give the autonomous ones? The same senses and reflexes we ourselves couldn't function without. If a human operator is about to ram the robot into a wall then DON'T LET HIM! Put ultrasound transducers on there and when the wall gets too close stop. Chances are you're not trying to run into the wall but instead run parallel to it. Then this behavior works perfectly. Get more in-depth with it. There are cars now that will auto-parallel park. This is amazing and probably safer than letting me do it myself. Robots should do the same thing. For any complicated tele-operation there should be a way to do it automatically. Most people can't back up a truck to a loading dock without someone watching for them. Is this any different?
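The 'don't let him ram the wall' reflex is only a few lines of code. Here's a sketch of guarding the operator's forward-speed command with an ultrasonic range reading; the distances and the linear ramp are made-up numbers, just to show the shape of the idea.

```python
STOP_DIST = 0.3   # m: hard stop inside this range (hypothetical threshold)
SLOW_DIST = 1.0   # m: begin scaling speed down inside this range

def guarded_speed(commanded: float, range_m: float) -> float:
    """Return the speed actually sent to the motors.

    Reverse commands pass through untouched; forward commands are scaled
    down as the obstacle nears and zeroed inside STOP_DIST.
    """
    if commanded <= 0.0 or range_m >= SLOW_DIST:
        return commanded
    if range_m <= STOP_DIST:
        return 0.0
    # Linear ramp between the two thresholds
    scale = (range_m - STOP_DIST) / (SLOW_DIST - STOP_DIST)
    return commanded * scale
```

If you're trying to run parallel to the wall rather than into it, the guard never fires; if you misjudge the camera view, the robot stops instead of crunching into the drywall.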

One obvious application of these ideas is an area that technically isn't tele-operation: powered wheelchairs. Have you seen them? They are beyond dumb as far as controls and intelligence are concerned. They consist largely of a battery, motors and a way to steer. Of course, at this stage of the design life for powered wheelchairs, the main problems being dealt with are not enough power and not enough battery life. But when those are solved you'll still have a powered wheelchair that is more than happy to run into anything - faster than it could before, and for a longer duration!

For the sake of pedestrians in the vicinity and the paint jobs of objects nearby, wheelchairs should be semi-autonomous. I hate to make generalizations, but for the most part older people's reflexes and fine motor control are on a downward trend. For many people confined to wheelchairs, that control may be largely gone. And there are many diseases that impair fine motor control. It's not right to expect our senior citizens to be able to control their wheelchairs perfectly with a joystick (in fact I wouldn't count on anybody's ability to control such a device with a joystick - the input method has to be somewhat tailored to the system being controlled, and a joystick doesn't move the same way a wheelchair does).

The most basic feature you could put into a semi-autonomous wheelchair would be obstacle avoidance. Every wall would be safe: attempting to run directly into one would cause the chair to stop a few feet short. You wouldn't have to worry about crowding out others if the wheelchair hugged a wall by default as it moved forward, and if it deftly maneuvered through a door frame without significant interaction, so much the better. If you wanted to sit at a table without knocking your feet on the table legs, just have the automatic systems do it. Perfect fit every time. You could even define more advanced algorithms. If you wanted to open a door from the wheelchair you'd need to move within arm's reach of the handle, turn it, then prop the door open with the wheelchair and move through. It might require some extra hardware and a lot of testing to be able to identify the door frame, back up, prop it open, etc. But the benefit to someone who couldn't open that door before would be significant.
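A sketch of what that most basic layer might look like: obstacle stop plus a gentle wall-hugging correction, assuming one forward-facing and one side-facing range sensor. The distances, gain and sign conventions here are all hypothetical; a real chair would need careful tuning and testing.

```python
def assist(joystick_fwd: float, front_m: float, side_m: float,
           hug_dist: float = 0.5, gain: float = 0.8):
    """Return (speed, turn) from the rider's forward command and two ranges.

    front_m: range to the nearest obstacle ahead.
    side_m:  range to the wall being followed.
    turn > 0 means steer toward the wall (we're too far from it),
    turn < 0 means steer away (we're too close).
    """
    speed = 0.0 if front_m < 0.6 else joystick_fwd   # stop a bit short of obstacles
    turn = gain * (side_m - hug_dist)                # proportional wall-distance correction
    return speed, max(-1.0, min(1.0, turn))          # clamp steering to sane limits
```

The rider still chooses where to go; the assist layer just vetoes collisions and handles the fine corrections that demand the reflexes and motor control the rider may not have.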

Let's face it - no one wants to be stuck in a chair. People weren't designed to move around in chairs; we were designed to walk. Trying to adapt our thinking and motor skills to a physical system totally different from our natural method of locomotion is difficult for anyone. Automatic systems can and should make this easier on those who have no other choice. It's an improvement in quality of life for everyone involved just to let the robot do some of the driving for you.

Wednesday, May 13, 2009

Power Power Everywhere...

But specifically from the sun. We will never run out of power ('never' being equivalent to several billion years) because of the sun. Consider that most of our power comes from consuming things. Basically we burn things - we turn complex hydrocarbon chains into simpler ones and live off of the resulting energy output. Coal, wood, sugar - it's all the same. Combustion. The sun is better. The sun is powered by nuclear fusion, squeezed into burning by gravity itself. By the very shape of the universe! There's a lot more energy out there than is sitting in every oil field, forest or coal mine. But how do we access it? The simplest method is by lying in the sun and getting warm. As much as I like a tan, I LOVE electrical power! So I use solar panels.

Let's talk about them for a second. Solar panels are a lot like batteries in some ways. You start out with solar cells, which are maybe an inch square of real estate. They're made of silicon in an extra-special arrangement which causes them to produce voltage and current when light strikes them. But how much of each? There are essentially two parameters that matter. The first is open-circuit voltage, the second is short-circuit current. They're fairly self-explanatory: if you shine sufficient light on the cell and don't put an electrical load on it, you'll measure the open-circuit voltage across its output; if you shine a light on it and put a short circuit on the output, you'll measure the short-circuit current through the output. In between those points is not a straight line, but a curve with a knee. Take a look at the graph I show below and you'll see what I mean.

You'll notice that the cells are almost constant voltage sources except when you attempt to draw currents close to the short-circuit current. You can either use a simplistic model of a constant voltage source or you can use a more complicated model that more closely matches the actual behavior:

If my calculations are correct, Il is the short-circuit current and the open-circuit voltage is equivalent to Il*RSh. Props to Wikipedia for having such awesome images that I can steal for free. I like free.
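
If you want to play with that more complicated model, here's a quick Python sketch of the usual single-diode equivalent circuit. I've ignored the series resistance so the equation stays explicit in V, and every constant here is an invented-but-plausible number, not a measurement of any real cell:

```python
import math

def solar_cell_current(v, i_l=0.3, i_0=1e-9, n=1.5, v_t=0.02585, r_sh=60.0):
    """Current out of a single cell at terminal voltage v (volts).

    Single-diode equivalent circuit with series resistance ignored:
        I = I_L - I_0 * (exp(V / (n * V_T)) - 1) - V / R_sh
    i_l is the light-generated current, i_0 the diode saturation current,
    n the ideality factor, v_t the thermal voltage, r_sh the shunt resistance.
    All parameter values here are made up for illustration.
    """
    return i_l - i_0 * math.expm1(v / (n * v_t)) - v / r_sh

# At V = 0 (a dead short) the cell delivers its full light-generated current
print(solar_cell_current(0.0))  # 0.3
```

Sweep v upward and you'll see the current hold nearly flat, then fall off a cliff as the diode term wakes up - that's the knee in the graph.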

So that's your single solar cell. They're a lot like the cells of a battery as I mentioned last time: to get a useful voltage out of them you have to put a lot in series. The difference between these and batteries is that you also typically have to put several sets of cells in parallel to get a decent current. I have an 18V solar panel made out of perhaps 32 solar cells - 8 in series and four sets of these in parallel. Its open-circuit voltage is 18V and short-circuit current is 300mA. This actually makes it rather useful for charging batteries!

When charging batteries you need two things at the same time: voltage and current. Well, lucky for us, voltage and current together are POWER! And we can get power out of our solar panel! Success! Just connect wires to things and make it go! Well, not that easy. You need specific voltages and specific currents to charge your batteries. I have some Ni-Cad batteries that need to be charged at a maximum rate of (off the top of my head - don't hold me to this) their total capacity divided by 10. So I have 5 batteries with a total capacity of 6 Amp-hours, so the max rate I charge them at is 6/10 = 0.6 Amps. And for charging these batteries current is the most important part: as you push more current into them their voltage goes up, and you stop putting current into them when they reach about 13.8V. So your voltage needs to be at least a bit above whatever the battery voltage is at the moment. Just figure you'll need at least 13.8V to get this to work.
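
The C/10 rule above is just arithmetic, but here it is as a Python sketch so the numbers check out:

```python
def max_charge_current(capacity_ah, c_rate_divisor=10):
    """Conservative max charge current: total capacity divided by 10
    (the 'C/10' rule of thumb - check your battery's datasheet!)."""
    return capacity_ah / c_rate_divisor

# The pack from the text: 6 amp-hours total -> 0.6 A max charge current
print(max_charge_current(6.0))  # 0.6
```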

So we have a specific power IN from the solar panel and we need different power OUT. My first guess is to use an LM317 in constant current mode. However this has problems. The main one is that as you try to draw constant current out of the solar panel its voltage will drop. And the LM317 is a linear device. For linear devices the rule of thumb is that the current into the device is the same as the current out, and the voltage out is less than the voltage in. Thus, you will need to make sure that your solar cell voltage stays above the battery voltage. Good luck, because even if you do make sure of that you will be dissipating the 'extra' voltage from the panel IN the LM317 as heat - thus losing it. Solar panels aren't amazing power sources, so I'd rather not waste any of the power that it does generate.
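
You can see why the linear approach hurts with a quick calculation - a Python sketch using the 18V panel and the 13.8V / 0.6A charge target from above:

```python
def linear_loss(v_in, v_out, i_load):
    """Power burned as heat in a linear regulator like the LM317:
    the 'extra' voltage times the load current, (Vin - Vout) * I."""
    return (v_in - v_out) * i_load

def linear_efficiency(v_in, v_out):
    """Best-case efficiency of a linear stage is just Vout / Vin,
    since input and output currents are roughly equal."""
    return v_out / v_in

# 18 V panel down to 13.8 V at 0.6 A: about 2.5 W lost as heat
print(linear_loss(18.0, 13.8, 0.6))       # about 2.5 W
print(linear_efficiency(18.0, 13.8))      # about 0.77
```

And that's the good case - the moment the panel voltage sags below the battery voltage plus the regulator's dropout, you get nothing at all.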

You can also do power transformation with a switched-mode device. The rule of thumb for switched-mode devices is 'power in equals power out' - the only loss is inefficiency. Another great thing about switched-mode devices is that they can produce nearly any output voltage from their input voltage. So you can always keep your output voltage above the battery voltage even if the solar panel voltage goes below it. The trick is that switched-mode power ICs are usually set up as constant voltage supplies. They employ a feedback voltage to set the output to the right voltage. They're a little control system! I love control systems! SOOOOO CUTE! It's contained in a single IC! Adorable!
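
Here's the 'power in equals power out' rule of thumb as a sketch. The 85% efficiency is a number I made up for illustration - check your converter's datasheet:

```python
def switcher_input_current(v_in, v_out, i_out, efficiency=0.85):
    """Input current a switched-mode converter draws: output power,
    divided by efficiency, divided by whatever the input voltage is."""
    p_out = v_out * i_out          # power delivered to the load
    p_in = p_out / efficiency      # power pulled from the source
    return p_in / v_in

# Panel sagging to 15 V while pushing 13.8 V at 0.6 A into the batteries
print(switcher_input_current(15.0, 13.8, 0.6))  # about 0.65 A
```

Notice the converter happily keeps working even if v_in drops below v_out - it just draws more current. That's exactly what the linear regulator can't do.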

Anyway, I know about control systems and I reckon that I can modify the feedback signal so that it's based off of the current going to the batteries instead of the voltage at the output. I would use a current sense resistor to monitor that current and then use a differential amplifier to amplify and scale the voltage from the current sense resistor. This would be the new feedback signal to the switched-mode power supply. Thus it would regulate its voltage to make sure that the proper current goes into the batteries.
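
To make the scaling math concrete, here's a Python sketch. The 1.25V feedback reference and the 0.1 ohm sense resistor are assumptions for illustration, not values from any particular IC:

```python
def sense_design(i_target, v_feedback_ref=1.25, r_sense=0.1):
    """Pick the differential amplifier gain so that, at the target charge
    current, the amplified sense voltage equals the regulator's feedback
    reference. The 1.25 V reference and 0.1 ohm resistor are assumed.
    Returns (sense_voltage, amplifier_gain)."""
    v_sense = i_target * r_sense     # voltage across the sense resistor
    gain = v_feedback_ref / v_sense  # gain needed to hit the reference
    return v_sense, gain

v_sense, gain = sense_design(0.6)
print(round(v_sense, 3), round(gain, 1))  # 0.06 20.8
```

With that gain in place, the loop servos the output voltage up and down until exactly 0.6A flows - a constant-current supply built out of a constant-voltage IC.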

My fingers are yet again getting tired so I'll mention that this is just the battery charging circuit. You still need an output stage connected to the batteries to power things off of the batteries. Also, there's an improvement on the simple battery charging circuit I described here. It's called a Maximum Power Point Tracker that gets even more power out of the panel than before.

Thursday, May 7, 2009

Wherefore Art Thou, Camping?

Exactly how many doohickeys can you take with you camping and still be considered to be 'roughing it'? I know many people for whom camping means 'in a camper', i.e. they need their flat panel TV and fridge full of beer or they're not going. I will sleep in a tent, so I consider myself somewhat superior to these people. But being an electrical engineer I just LOVE gadgets and I can barely stop myself from bringing them along. I know not to even THINK about bringing a laptop, but I'm still bringing our battery-powered fan. Of course, instead of batteries I'm using an emergency jumpstart car battery thingamajig with a cigarette lighter power adapter. If you want to power AC devices you'll need an inverter - just make sure it has sufficient maximum power (I've experienced the letdown of not having enough power firsthand when my friends and I figured out we couldn't run a smoke machine off of an inverter plugged into a car. Memories!) I haven't graduated to thermoelectric coolers yet but soon I will.

The question is how do you power all of your gadgets in the wild? A car battery is a good start but for a perfectionist like myself it just won't do. Car batteries are heavy. If you have to carry anything any distance you'll wish you hadn't brought it. You can find better batteries than the lead-acid car battery. Nickel-Cadmium, Lithium-Ion and Lithium Polymer are all better options (i.e. greater energy density per kilogram). Technology is only one thing to look for when choosing a battery. There are four main things to look for in a battery: Voltage, Capacity, Current Capacity and Technology/Battery Type (Ni-Cad or Li-ion or whatever).

Voltage is easy for the most part. Car batteries are 12V nominally and for the most part that is the magic voltage. Since your car runs on 12V, everything that plugs into your car runs on 12V, so there are lots of devices out there that will happily run on this voltage. Other popular voltages are 9V and 5V, but if you have a 12V source you're in luck because you can use dead-simple linear DC-DC conversion to get down to those voltages. Look up the LM7805, or in fact, everything in the LM78xx line (the last two digits are the nominal output voltage). They're really simple to use - for an electrical engineer anyhow. Fun fact: a battery is not a single unit - it's made up of cells. Depending on what the battery is made of, each cell has a different voltage. It's intrinsic - entirely dependent on the chemistry used in the battery. You get other voltages by stacking these cells up in series. Lead-acid is about 2V per cell, so a 12V battery has 6 cells in series. Another fun fact: it's not called a battery unless there is more than one cell in it.
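
The cell-stacking arithmetic looks like this in Python. The per-cell voltages are typical textbook numbers (nominal lead-acid is about 2V per cell):

```python
import math

# Nominal per-cell voltages for a few chemistries (typical textbook values)
CELL_VOLTS = {"lead-acid": 2.0, "ni-cad": 1.2, "li-ion": 3.7}

def cells_needed(target_volts, chemistry):
    """How many cells in series to reach at least a target nominal voltage.
    The round() guards against floating-point crumbs before the ceil."""
    per_cell = CELL_VOLTS[chemistry]
    return math.ceil(round(target_volts / per_cell, 9))

print(cells_needed(12, "lead-acid"))  # 6
print(cells_needed(12, "ni-cad"))     # 10
```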

Capacity of the battery is also straightforward. It's measured in Amp-Hours. If my battery is rated at 7A-Hs then it can in theory deliver 1A for 7 hours before it's empty. Of course this is simplistic. Voltage degrades as the battery empties, so if you need 1A at 12V for seven hours you may not be in luck. After hour three the voltage may go down to 11.5V, then down to 11V next hour, etc. If you're using linear step-down conversion then you're out of luck for a couple of reasons:

  1. You need a certain minimum voltage difference between the input and output of the voltage converter (the 'dropout' voltage). If your battery voltage goes too low then you won't meet this requirement and you won't get any power at all. This can be alleviated with a switched-mode voltage converter

  2. Assuming the power output is constant, lower volts means higher amps. As your voltage goes down your converter will draw more amps just to have the same amount of power and this will cause your battery to drain faster. Switched-mode voltage converters are not immune to this

The degradation in voltage is a function of the battery technology, so your mileage may vary.
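
The capacity arithmetic, plus the rising-current effect from point 2 above, sketched in Python:

```python
def runtime_hours(capacity_ah, load_amps):
    """Idealized runtime: amp-hours divided by load current. Real batteries
    sag in voltage as they empty, so treat this as an optimistic number."""
    return capacity_ah / load_amps

def current_at(v_batt, power_watts):
    """At constant output power, the converter draws more current as the
    battery voltage sags - which drains the battery even faster."""
    return power_watts / v_batt

print(runtime_hours(7.0, 1.0))   # 7.0 hours, in theory
print(current_at(12.0, 12.0))    # 1.0 A with the battery at 12 V
print(current_at(11.0, 12.0))    # about 1.09 A once it sags to 11 V
```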

Current capacity is distinct from the other capacity. Current capacity is the ability of the battery to source large amounts of current. Not all batteries are equal in this respect. As with any non-ideal voltage source a battery has a certain amount of internal resistance. This means as current flows out of the battery, power is dissipated in the internal resistance. This creates heat. Heat causes explosions, especially when it's applied to exotic chemicals. Remember when Dell's batteries were exploding? They got too hot. They got too hot because despite the fact that Dell spec'd the batteries with a peak current of say 5A for one minute, the manufacturer ignored that and went cheap on the batteries. This caused them to heat up when the laptop pulled the amount of power from them it thought safe. They got so hot that the chemicals got angry, and *POOF*! Exploding laptop. In general batteries are high-performers for current sourcing - that's the reason your emergency jumpstart kit works better than the alternator on your Yugo - current capacity.
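
The internal-resistance heating is just P = I²R. A quick sketch with a made-up 50 milliohm pack - the numbers are for illustration, not from any real battery:

```python
def internal_dissipation(i_amps, r_internal_ohms):
    """Heat generated inside the battery itself: P = I^2 * R.
    More internal resistance or more current means more heat."""
    return i_amps ** 2 * r_internal_ohms

# A hypothetical 50 milliohm pack at a 5 A peak: 1.25 W cooking the cells
print(internal_dissipation(5.0, 0.05))  # 1.25
```

Note the square: double the current and you quadruple the heat. That's why a peak-current spec matters so much.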

Technology is a good one. I'm not into chemicals and ions and things but it's somewhat exciting to see the effect that all of that sciencey stuff has on actual performance and use. As I said before, technology affects all sorts of things on a battery: nominal voltage, voltage degradation over time, capacity-to-weight ratio, discharge capability, charging method, discharge behavior, etc. In general newer battery technology has a better capacity-to-weight ratio, less voltage degradation and greater current capacity. What it doesn't have going for it is simplicity and reliability. Simplicity in that newer batteries are more difficult to charge. Lead-acid is dead simple: apply 13.8V to a 12V battery and let it go for a while. It will charge completely. Lead-acid is surprisingly hard to damage as long as you don't exceed its recommended voltage. Newer technology is finicky about how much current you push into it and at what voltage, and you must stop trying to charge it when it's done or else you'll break it. As for reliability, consider that you can discharge a lead-acid battery almost down to 0V and it will still live - just charge it up again (deep-cycle marine batteries are the best for this). Most other battery types can't handle this. If you let them get too low then they're dead.

My fingers are tired so I'll leave until a different time the discussion of the solar-powered power unit I'm making. For my camping gadgets of course!

Monday, May 4, 2009

Firmware in all things

So today I'm home from work for a bit to wait for the repairman (or repairwoman) to come and take a look at my dishwasher. It has lights flashing on the front and won't do anything no matter how many buttons I press. You may be wondering why I don't just fix it myself. Despite all of the experience I have with fixing household appliances (none) and my familiarity with similar devices (huh?) I've decided that it's just not worth my time to try to fix the thing myself (although having to stay home from work to wait for the repairman somewhat negates that position). I tried some things. The manual says that it may be a bad heating element and to check the wiring. I checked the wiring (well, looked at it anyhow) and nothing. Some advice online said to try pressing a sequence of buttons in rapid succession - nothing. I gave up after that.

Why? Because it's obvious this isn't a hardware problem. And if it's not a hardware problem it's a just plain HARD problem. Mechanical systems are easy - they either work or they don't. No ill-defined states, no invalid inputs, no built-in tests. If something can't happen it physically can't happen. If one gear is moving then the gear in direct contact with it also has to be moving. If something sounds wrong it's probably directly linked to the problem - just follow the connections.

But somewhere in this dishwasher is firmware - code. Code breaks all of the rules. Impossible things happen in code all of the time. Jump to the wrong memory address? You could end up executing impossible code. 'This shouldn't happen! I never called this function! It's impossible!' Mess up with pointers or indices? You could start reading impossible values. 'Umm, an unsigned 8-bit integer CAN'T be 1024....' Use a case fall-through instead of explicit checks? 'That's IMPOSSIBLE! That value isn't handled! Why isn't it going to default...'

Code is arbitrary. I have a flashing light. That's not a symptom of the problem - it's an indication. I need a manual to tell me what it means, and if that doesn't suffice I need the REAL manual - the one they only give to the service technicians (well, sell anyhow). It means whatever they tell me it means. A mechanical device is not arbitrary. If my engine is overheating it doesn't do something illogical like lock the drive shaft automatically - it just starts heating up. All according to rules Mr. Newton figured out (differential equations baby!). You can track it and explain it with rules you find in your high school physics book. The code only follows the rules that Mr. Programmer set forth and he's not bound to adhere to any standards and even if he was he wouldn't tell you. Furthermore, the state of a machine is obvious and can be ascertained by observation (perhaps complicated observations, but still, observations). Code need not give any indication of state and typically doesn't, or it's not very useful (green light - good, red light - bad!)

And if you aren't rigorous in your code you can have undocumented behavior. It's entirely possible for you not to be able to reach one of your states or not be able to leave it if you make simple mistakes. And if your testing doesn't catch it then you'll have to reboot your dishwasher or some other such silliness.

Computers are very powerful, but simple. They execute instructions at memory addresses. Everything else is defined by the designer. If he or she is incomplete in the definition of the system then irrational behavior follows. Mechanical systems are bound by the laws of physics: they cannot perform physically impossible functions, they must move smoothly between states that are defined by their physical characteristics alone, and failures are characterized by smooth transition to a new state (one which is obviously broken). Software doesn't follow these rules and that makes it much more difficult to troubleshoot.

PS: It was bugs. Living on the electronics. Dirtying them up. I'm not a dirty person, I swear. It wasn't a vague error code or a random impossible-to-get-to state, it was just flat-out broke. At least even computers have flat-out broke modes...

Thursday, April 30, 2009

Laziness Continued

I am a fan of taking steps out of processes. Before tonight in my TinyCAD library creation process I had to do a few things:

  1. Update my CSV file with new attributes and/or data

  2. Update the schematic symbols in TinyCAD

  3. Export the symbol library as XML to a certain folder

  4. Run a Python script to turn each symbol in that XML file into its own XML file

  5. Run my main library creation Python script

  6. Open TinyCAD to check for errors. If any symbols are wrong, repeat from the second step

I hadn't really used it earnestly yet but it seemed like too many steps. So I decided to add functionality to my library creation script to pull the symbol data from the symbol library file directly instead of XML files created from that library. Tonight I got it to work, so now the process is:

  1. Update CSV file

  2. Update symbols in TinyCAD

  3. Run library generation script

  4. Check for errors

This is a great improvement. I may actually get work done now!

Tuesday, April 28, 2009

Exceptional Programming

I grew up programming. My first computer was a Laser 128 - an Apple IIc clone. I learned Applesoft BASIC. Those were the days when the command line was also the BASIC interpreter. I programmed everything on that computer. Well, if 'everything' is some science fair projects and cute programs with blocky graphics. I grew up and got an 8086 and cut my teeth on QuickBASIC, then eventually C. In college I was taught C++ and I also picked up Perl and PHP. But despite all of this I never learned about exceptions until I earnestly got into Python.

We see exceptions all the time on our computers. "Firefox caused an exception in blah blah blah blah." I didn't really know what it meant past 'the program screwed up'. I had done most of my programming in C which doesn't have exceptions (or if it does I've never used them). Sometimes I screwed up in my programs and all sorts of crazy things happened. Gibberish printing out on the screen, random lockups, the computer starts beeping and won't stop, etc. I can't be sure, but once I think I caused the gibberish to come out on the printer. I'm probably imagining that though.

But after working with Python I found out those screwups were actually called exceptions. And what's more, you could handle them. Just put everything inside a try statement and if something bad happens you can catch it below and go on your merry way. It's great to just be able to deal with it and continue. But I noticed something. Exceptions weren't always exceptional. Try to read past the end of an array? C would happily let you do it and you could go crazy trying to figure out why. But an exception-driven language would catch it for you and stop you. And some functions throw exceptions instead of just telling you that you did something wrong. Well, I suppose that's HOW they tell you you did something wrong. But if you have three statements inside a try block and you just get a generic exception back, you're going to need to do more work to figure out what went wrong. And imagine my horror when I learned that some people will use exceptions for something as mundane as input validation! Maybe I'm just old fashioned, but in C you did your own input validation. You didn't just accept whatever the user gave you and then throw a fit when it turned out to be wrong. But what's worse is that this wasn't on your run-of-the-mill PC, but on an embedded device!

Exceptions? On an embedded device?

I've always been told that embedded devices needed to be ultra-reliable. All failure modes had to be accounted for and handled. True, exceptions are one way of handling failure, but for a lazy programmer it's entirely too easy to wrap the entire program in one big try block and just restart in case something bad happens. It's too easy to not even plan for the failure modes because you have exceptions. That may fly for your DVD player but certainly not for your airplane.

So what's the alternative? For one, strict input validation for everything - not just user data. Don't even assume that your own code will always pass you valid values in functions - always check! But then how do you signal an error? In C many functions would return a non-valid value if there was an error. For instance, abs() might return -1 if there was an error (and you had better check!). But this scheme doesn't always work. For some functions there may be no possible invalid values. You can't take a chance on using a valid value as your error signal. Sure, you'll bury the true meaning of the return value somewhere in the documentation but honestly who's going to look until there's a problem? And by then they've already cursed you for being so tricky.

No, the solution is to return a status along with your return value. In some serial systems the receiving device will respond to any message with a status message to tell you that it successfully received your information. Typically zero is 'OK' and everything else has a specific meaning - either the byte as a whole means something or each bit has significance. This can be done with functions as well. You can either pass all of the parameters by reference and return the status, or return the status and parameters together in a struct if you don't like pointers. Of course it may be more work to work with a struct (and it's dangerously close to object-oriented programming!). In this way a function can tell you that it failed and why it failed. You can use an enum to define all of the different error codes and then handle each of them. Of course some errors cannot be handled by your code alone. If data shows up late in a control system there's not much you can do about it except not use it in calculations. And your returned status will tell you if it was late (assuming you check it).
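
The discussion above is about C, but the status-plus-value pattern translates to almost any language. Here's a Python sketch of the contract - the error codes and the sensor function are hypothetical, made up purely to show the idea:

```python
from enum import IntEnum

class Status(IntEnum):
    OK = 0            # zero means success, as in the serial systems mentioned
    OUT_OF_RANGE = 1  # hypothetical error codes from here on down
    STALE_DATA = 2

def read_sensor(raw, max_valid=1023, stale=False):
    """Return (status, value). The caller must check the status before
    trusting the value - the same contract as a C function that returns
    an enum and writes its result through a pointer."""
    if stale:
        return Status.STALE_DATA, None
    if not 0 <= raw <= max_valid:
        return Status.OUT_OF_RANGE, None
    return Status.OK, raw

status, value = read_sensor(512)
assert status == Status.OK and value == 512

status, value = read_sensor(4096)   # impossible for a 10-bit ADC
assert status == Status.OUT_OF_RANGE
```

The key point: every failure mode has a name, and the caller decides explicitly what to do about each one - no giant try block swallowing everything.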

Exceptions are very good form for PC applications but it's easy to be lazy with them. Embedded programming sets a higher bar for the programmer, so bring your A game when developing for embedded devices. Always check inputs, parameters and status returned to make sure that you are working with valid data. The passengers on your airplane will thank you.

Update: I had a conversation with one of my colleagues on this issue and we're of the same opinion - in a sense. He agrees with me in essence, but he thinks exceptions are a valid method of achieving the same goals. BUT - we both agree that in embedded development you shouldn't be catching general exceptions or wrapping your entire main() in a try block. You have to know what failure you expect to happen and exactly how to correct it in EVERY case - exceptions or not. The way exceptions work on PCs (Oops, Firefox caused an exception - it's quitting) is unacceptable for embedded development. I shudder to think that the same approach would be applied to an embedded device by an unaware programmer, but that doesn't indict exceptions in general. Just their misuse.

Saturday, April 25, 2009

Laziness (and Python)

I wrote enough last time to scare some of you about serial communication. I was going to write about the design of a simple serial messaging protocol to make all of you readers feel better, but my laziness got to me. I mean, it's all designed in my head, but I figured I'd need tables and diagrams and figures and such to really explain it well. That riled up my innate laziness, so I decided to write about that instead.

To put this in some perspective, I am supposed to be the library maintainer for TinyCAD. It's an open-source schematic capture program, and development on it just started again after a few years of languishing. My duties are to put together libraries of basic schematic symbols (resistors, capacitors, diodes, some ICs, etc) for other people to use.

I'm not doing a good job. I have released no libraries yet.

I do admit - I'm lazy. And this job requires a fair bit of tedious work. The libraries are stored in Microsoft JET database format and the symbols can only be edited with TinyCAD's built-in drawing tools. They're OK, but not if you want to make two dozen symbols at once. Or worse yet, make slight changes to two dozen symbols at once. Or just add a bunch of meta-data (part name, manufacturer, part number, etc). You have to create all the fields for every part, edit them manually, save, etc etc. The entire process was not made for batch creation and editing. I was in danger of getting absolutely no work done at all. Or worse yet, I was in danger of doing lots of tedious work and then re-doing it when I needed to make slight changes to every part I had already made. Something that realistically I should only have to do once and then forget about it.

I am no fan of manual operations, especially when they are tedious and repetitive. Humans are not geared towards that sort of work - we make more mistakes and are much less efficient than a small shell script. When I was making wirelists my brain would just shut off after a while and I would do all sorts of really wrong things. Or I would cut and paste things that shouldn't be cut and pasted because although I thought two things were the same they weren't. My wirelist had tons of errors that luckily weren't too expensive to fix. But the lesson is clear - don't use people for repetitive operations. They're bad at it.

So when I was faced with the prospect of doing just that for these libraries I said 'no' and started immediately with the laziness. Laziness is bad of course because it keeps you from getting work done. And since I wasn't required by force of law or paycheck to make these libraries I wasn't too insistent on starting a process I knew would be frustrating and error-prone. I did nothing until I learned about Python.

Python is a scripting language that is designed to be easy and do everything. It is very close to succeeding. As I said before these symbol libraries were made with the Microsoft Jet database - it's the back end of Microsoft Access. I am not entirely impressed with Microsoft products but I knew SQL and Jet supports SQL queries, so I had some baseline. And I had Python. A little searching and I figured out which module I had to import to get access to the database. A little more searching and I figured out how I could insert the BLOB (Binary Large OBject - it's how raw data is stored in database fields) for the symbol drawings. Then I said to myself, why don't I store the symbol text data in a CSV file so I can just type things out once? Python has a module for that too. TinyCAD can export the parts data as XML? Great, I can use the DOM module to read the XML and create a CSV out of the current libraries.
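
I can't show the Jet database half without the database itself, but here's a sketch of the CSV half of the workflow using Python's csv module. The column names and part numbers are invented for illustration:

```python
import csv
import io

# A stand-in for the real attributes file; these columns are made up.
CSV_TEXT = """name,manufacturer,part_number
resistor_0805,Yageo,RC0805FR-0710KL
capacitor_0805,Kemet,C0805C104K5RACTU
"""

def load_attributes(csv_file):
    """Read per-symbol metadata into a dict keyed by symbol name, so a
    library script can look up each symbol it pulls out of the library."""
    reader = csv.DictReader(csv_file)
    return {row["name"]: row for row in reader}

parts = load_attributes(io.StringIO(CSV_TEXT))
print(parts["resistor_0805"]["manufacturer"])  # Yageo
```

Edit one CSV file, rerun the script, and every symbol's metadata gets regenerated - no hand-editing two dozen parts one field at a time.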

If you've ever used Matlab scripting you'll feel right at home. Heck, there's even SciPy - it mimics many of the functions of Matlab (graphing, matrix operations, basic math, etc) in case you're too poor to buy Matlab. You can use wxWidgets or any other GUI library to create GUI apps. You can access almost any database. You can draw with Tkinter. You could lose your voice from listing all the things you can do. It's easy, comprehensive and powerful.

Python gave me a reason for my laziness. I can do more work, more accurately and faster than if I had tried to do everything by hand. Bottom line: if you're avoiding doing some tedious work, pick up Python or any scripting language and do yourself a favor by making a tool to do the work for you. You'll thank me.

Tuesday, April 21, 2009

The scary world of serial protocols...

Serial is one of those things that may seem easy at first glance but in reality is so complicated that you want to hang yourself. I used to see plenty of job descriptions that required something along the lines of 'knowledge of serial protocols'. What a joke! 8-N-1, 115200bps, nine pin serial connector DONE! And you get BITS out the back end when you're done. Are people afraid of BITS?

Some people are afraid of BITS (I'm looking at you CS majors), but that's not the trouble with serial protocols. The trouble is that 'serial protocols' is really vague. The first protocol you might think of is good old RS-232. It's a lovely standby: 8 data bits per frame, no parity, one stop bit and your choice of baud rate. Why would anyone want to deviate from that? But what do all of these settings actually mean? It's swell if you have a GUI to enter these values in, but it gets harder when you have to configure a microcontroller with nothing but assembly. And even then what actually HAPPENS when you send a message? What does the waveform even look like? These may be idle questions for you until the first time that something doesn't work and you have to dig deeper than the GUI to fix it.

For instance what are the voltage levels of RS-232? The answer - heh. There is no single answer. The standard calls for something like +/-15V. That'd be easy except that no one follows the standard! You might get +/-12V, or 0 and 5V (TTL levels). The scary thing is that most of these work because transceivers are often not too picky about voltage levels. And then when you figure that out you'll be surprised to know that the voltage levels are the inverse of what you'd expect - a '1' is the negative voltage, a '0' is the positive one. If you don't know that you'll have at least 20 minutes of confusion.

Add to that the fact that even if you've figured out your RS-232, there are MANY more serial interfaces out there - all different. I2C and SPI are synchronous - there's a clock signal transmitted, unlike RS-232. You don't have to worry about addresses with RS-232 since you've only got two devices communicating, but not so with the others. And have you even thought about hardware handshaking? Unnecessary with RS-232 but crucial to the others. I hope you can figure out why you need an open collector output with I2C...

And don't even get me started about data representation. Do you think bits are just bits? That 0x35 is always equivalent to 53 decimal? Not so fast. It could be ASCII-encoded, which would mean it represents the character '5', not the value 53. If you look at your serial data stream in HyperTerminal, the output you see is decoded from ASCII - that 0x35 byte displays as '5'. ASCII encoding is useful when data is being displayed on a terminal, but it's also used in other circumstances where you'll tear your hair out because of it.

For instance many serial buses have packets with headers, footers, checksums, etc. Packets are often started with 0x02 (the ASCII STX character) and ended with 0x04 (EOT). You can transmit the length of the data packet when you send it so that your device will know when to start looking for another one, or you could choose not to. In that case, what if the data you're sending has an 0x04 in it? That would tell the device to stop listening and it would ignore the rest of the data. So, you encode all of your data as ASCII. If you need to send the value 0x04 (decimal value 4, obviously) then you encode it and send it as two bytes - 0x30 and 0x34, the characters '0' and '4'. When you receive it, chop off the 3's, push the nibbles together and you get 0x04 again - there's your data.
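
Here's the nibble-encoding trick as a Python sketch, sending the most significant nibble first (remember, that order is just a convention you have to agree on):

```python
def encode_nibbles(byte):
    """Encode one data byte as two ASCII hex characters, most significant
    nibble first. 0x04 becomes the bytes 0x30 and 0x34 ('0' and '4'),
    so the raw value 0x04 never appears on the wire."""
    return bytes(f"{byte:02X}", "ascii")

def decode_nibbles(two_bytes):
    """Chop off the ASCII and push the two nibbles back together."""
    return int(two_bytes.decode("ascii"), 16)

encoded = encode_nibbles(0x04)
print(encoded)                         # b'04' - the bytes 0x30 and 0x34
print(hex(decode_nibbles(encoded)))    # 0x4
```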

And that brings up another point - nibble order. Do I send the least significant nibble first (the '4') or the most significant ('0')? What does the standard say? There isn't one. You have to be told how to interpret the data by whoever put it together, and God help you if you didn't do it yourself.

Suddenly 8-N-1 doesn't sound as simple as it used to. Yes, it's all just BITS, but they're scary bits. They're bits that mean whatever the person on the other end of the conversation wants them to mean. You have to parse them, reorder them, combine them, split them and take a magnifying glass to them to make any sense of it. Watch out kids - it's a jungle out there.

Wednesday, March 4, 2009

Better pathfinding

I think Computer Scientists are so cute when they pretend that they can build robots. Now I'm not knocking all of them, but there's a select group of shall we say unworldly computer scientists who are too far up in the clouds for robotics. Software is of course very necessary for robots but so are electronics and mechanics. Just TRY to build a robot on your own or even work with a pre-made robotic base if you don't understand the first thing about torque vs angular velocity and how it relates to power. Or beat your head against a wall when your robot takes a heading of 3 degrees vs the 0 degrees you told it to. Why DOES it do that? Surprise! Your sensors aren't that accurate even though your numerical precision is.

Anything having to do with kinematics is going to confuse the bejeezus out of someone who isn't familiar with it. True, it's all stuff you learned in physics but the devil is in the details. It took me a good month to really internalize the roles of current and voltage when dealing with motors (in short - current is torque, voltage is speed but it's only this simple at steady state!). And while it's very straightforward to say the reason your car can't turn on a dime at 50mph is momentum, it's a much harder thing to tell me exactly how quickly your car CAN turn at that speed. This is the realm of physicists, mechanical engineers and me - the controls engineer. It's part of my job to do the calculations that tell you how much effort you will expend to turn your car at that speed (and whether it's safe).
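
To make the 'current is torque, voltage is speed' rule concrete, here's a steady-state sketch of a brushed DC motor in Python. The motor constants are invented for illustration - real ones come off a datasheet:

```python
def dc_motor_steady_state(v_supply, torque_load, k_t=0.05, r_winding=1.0):
    """Steady-state brushed DC motor model (SI units, made-up constants):
        torque = Kt * I          -> current is set by the load torque
        V = I*R + Kt * omega     -> back-EMF soaks up the leftover voltage
    Returns (current_amps, speed_rad_per_s). Only valid at steady state!"""
    i = torque_load / k_t                     # current demanded by the load
    omega = (v_supply - i * r_winding) / k_t  # leftover voltage sets speed
    return i, omega

i, w = dc_motor_steady_state(12.0, 0.1)
print(i, w)  # 2.0 A and about 200 rad/s
```

Notice that heavier load torque means more current, which means more voltage lost across the winding resistance, which means the motor slows down - exactly the coupling that trips people up.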

So it's no surprise to me that I hear more about what data structures to use for pathfinding, how to make it computationally less complex and whether it always produces the shortest-cost path (all GREAT fodder for computer scientists) than I hear about whether the path is actually something useful. Most discussions I've heard ignore what will actually happen when you attempt to follow that path. For instance - if your GPS has you making 90 degree turns every 5 seconds, I don't care how 'least-cost' your path is in terms of the criteria you assigned, because it requires too much effort (starting, stopping, slowing, turning, etc). It also certainly won't be the least time-intensive path.

The level of 'smoothness' of motion is a concern in controls. Jerky motion requires much more power than nice smooth motion. It's hard to get things to stop and change direction all the time. I once worked with a digital servoamplifier which had several different 'motion profile' modes. I needed to feed it position commands every 0.1s. So the first mode I tried had it move as fast as possible to the commanded position and then STOP. Then start up again when it got the next command, move as fast as possible and then STOP when it reached the position. That's the first thing you'd try if you had never designed anything like this before, and it turns out to be an awful approach. A much better approach was the second motion profile - attempt to arrive at the commanded position but keep moving in the expectation of the next position command.

Now apply this to pathfinding: Are you going to go from Point1 to Point2, then STOP, change direction and go to Point3 (then STOP)? Ideally not. You want a smooth path. You need to take several things into account: current speed, heading, difficulty of turning, etc. You make a model of your vehicle and compute how difficult it would be to actually execute all of those crazy moves the algorithm thinks up. So instead of just finding the shortest (distance) path, you weigh the two factors against each other. Just make sure that your weights prevent impossible actions ('Go straight ahead at 50MPH then stop after 50.18 yards and make an instant right turn and immediately resume 50MPH. It'll get ya there in 23 seconds flat. LEAST COST BABY YEAH!'). This will lead to more realistic paths whose costs more accurately reflect realistic situations. That's better for everyone.

A quick implementation of this would be to add it to the movement cost in A* (I've heard this called the G score of a square). In the simplest A* algorithm I implemented, the movement cost for a 90 degree movement (forward, back, left, right) was always 10, and for a diagonal movement was 14 (sqrt(2)*10, because of the added distance to reach a diagonal square). I would keep track of my current heading in degrees. Assume that the top of the screen is zero, right is 90 degrees, bottom is 180 degrees and left is 270 degrees. Heading in this case will be either 0, 45, 90, 135, 180, 225, 270 or 315 degrees. Just add a cost proportional to the change in heading to the G score - this will keep your path from having too many jerky motions. You'll have to fiddle with the weight to balance least cost vs. least effort (you don't want a lazy robot, do you?).

You can then get more advanced by incorporating speed and momentum into the model. To incorporate speed, allow the pathfinding algorithm to search more than just the squares around it, but with a caveat: it's difficult to speed up and slow down. Say my normal speed is 1 square/turn - that gets me to the forward, back, left and right squares for a cost of 10. When I want to go to a diagonal square I have to increase my speed to 14. If I want to reach the square two squares ahead of me, my speed has to be 20 - I have to speed up. Speeding up takes effort, so add to the G score a cost proportional to the absolute value of the difference between my current speed and next speed (absolute value because any change in speed requires effort - you don't gain effort back by going from high speed to low speed), AND use the speed in the effort cost calculation for turning (higher speeds make turning harder).
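Here's one way the heading-cost idea could look in code - a minimal, hypothetical Python A* over a grid, with the heading-change penalty folded into the G score. The 0.05-per-degree weight is just a starting point to fiddle with, and everything else (names, heuristic) is my own sketch:

```python
import heapq
import math

# 8-connected moves: (dx, dy, heading in degrees, base cost).
# Top of the grid is heading 0, right is 90, bottom 180, left 270.
MOVES = [(0, -1, 0, 10), (1, -1, 45, 14), (1, 0, 90, 10), (1, 1, 135, 14),
         (0, 1, 180, 10), (-1, 1, 225, 14), (-1, 0, 270, 10), (-1, -1, 315, 14)]

TURN_WEIGHT = 0.05  # extra G cost per degree of heading change (tune this)

def turn_cost(h1, h2):
    """Penalty proportional to the smaller angle between two headings."""
    d = abs(h1 - h2) % 360
    return TURN_WEIGHT * min(d, 360 - d)

def astar(grid, start, goal, start_heading=0):
    """A* over a 2D grid (0 = free, 1 = wall); the search state is
    (position, heading) so turning has a cost, not just distance."""
    def h(p):  # octile-style heuristic matched to the 10/14 move costs
        dx, dy = abs(p[0] - goal[0]), abs(p[1] - goal[1])
        return 10 * max(dx, dy) + 4 * min(dx, dy)

    open_q = [(h(start), 0, start, start_heading, [start])]
    best = {}
    while open_q:
        f, g, pos, hdg, path = heapq.heappop(open_q)
        if pos == goal:
            return path
        if best.get((pos, hdg), math.inf) <= g:
            continue  # already reached this (square, heading) more cheaply
        best[(pos, hdg)] = g
        for dx, dy, nh, c in MOVES:
            nx, ny = pos[0] + dx, pos[1] + dy
            if 0 <= nx < len(grid[0]) and 0 <= ny < len(grid) and grid[ny][nx] == 0:
                ng = g + c + turn_cost(hdg, nh)
                heapq.heappush(open_q, (ng + h((nx, ny)), ng, (nx, ny), nh,
                                        path + [(nx, ny)]))
    return None  # goal unreachable
```

On an open grid this prefers one long diagonal run over a staircase of constant direction changes - exactly the kind of 'smoothness' the paragraph above is after. Raise TURN_WEIGHT and the paths get straighter still, at the cost of extra distance.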

Once you start down this path there are many physical parameters you can add to the system. I have yet to implement this but I look forward to some excellent pathfinding in the future!

Thursday, February 19, 2009

Process Process Process!

I know I probably won't make any friends with some engineers when I champion process. Many (younger) engineers are free spirits who love to do things their own way. To them each problem is a unique flower that requires a highly-specialized engineering ninja to swoop in, dismember the flower's head, toss it in the air, slice it up and carefully place a decorative plate underneath the pieces as they fall to form an intricate and subtle artistic pattern which he will present to the customer. At least that's his intent. Conversely, a more seasoned engineer will look at the problem and say 'Oh God not THIS again....' and go back to his bag of tricks to pull out the Excel spreadsheet or Python script or piece of hardware that solved this problem for him in minutes last time (after taking hours if not days to create in the first place!). The seasoned engineer isn't trying to impress anyone - he's trying to get the job done. Last time he had this problem he DID try to impress someone and ended up hurting himself by taking way too much time to finish - he probably missed a deadline and maybe a large chunk of money by trying to be clever and 'unique'. So he figured out a process to get the job done quicker with fewer missteps. He wants to help the younger engineer by teaching him how the process will help him save time and effort, but what the younger engineer hears is 'Don't be creative! Don't be clever! Fit into this box!'

The truth is we all want to be creative to some degree. It's part of our hierarchy of needs - right at the top. Many people see process as drudgery that inhibits creativity. Who wants to file a bunch of forms to design something? Who wants to document when they could invent? And I agree. I would question the truthfulness or sanity of someone who said they truly enjoyed documenting their latest Lego creation rather than spreading the pieces all over the floor and putting them together. But we have to have a sense of perspective about this. Documentation and process save us from drudgery and boring work. It's not FUN to have code that just won't compile, or a bug you can't track down for the life of you. It's not FUN to build a Simulink model that does nothing other than vibrate itself to pieces. It's frustrating and sometimes humiliating. It certainly doesn't help my self-esteem to fail so much.

So when you DO figure out how to make the program compile or the model work, you write it down. You share it with everyone - partly to prove you're smart, but partly so they don't have to go through it like you did. And then when you get a few more good tips you compile them together, format them, codify them, and put them somewhere that everyone can see. You keep track of the resources you used, the rules you followed, the steps you took - all so no one has to go through the same experience you went through: just follow this process and go right from A to F, skipping B-E! Yep, you just created a process, and now some young blood is telling you that you're cramping his style and he's going to do things his own way. Oy.

Folks, process may not be glamorous but it's there to help you! It's there to make sure you don't have to do extra, unnecessary work. It's there to help you get to the fun part more quickly. Trust me - someone who was there before you figured all this out and wants to save you the trouble. Just go with it.

Thursday, February 12, 2009

The Dangers of Object-Oriented Programming

In theory I love OOP (Object Oriented Programming) - up to a point. I love having properties. If I want to set one of my discrete I/O ports active then I like to be able to say something like DIO[0].State = Hi. Perfect right? It's at least better than, say: DIO_State[0] = Hi. I think that's a little sloppy.

I'm not so hot on methods. Someone who is might try this: DIO[0].Assert(). Of course you'll also need DIO[0].Decline() (? I'm not fully sure on the opposite of assert at the moment. It's been a long day.) But then you also need something like DIO[0].Tristate() and DIO[0].UnTristate() and DIO[0].Toggle() which start to get a little hairy.

But why do you need those? Oh, that's because you made the State property 'protected' so that people can't set it directly. That's another tenet of OOP - don't let people touch things that they shouldn't. Provide 'safe' methods to change properties instead of letting people do it themselves (because they're EVIL! EEEEEVIL! They'd set your properties to invalid values and THEN what would you do?)
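For what it's worth, Python lets you have it both ways with a property setter - the assignment syntax stays, but invalid values still bounce. A sketch with entirely made-up names and levels:

```python
class DIO:
    """One digital I/O line; the names and levels here are hypothetical."""
    _LEVELS = ("Hi", "Lo", "HiZ")

    def __init__(self):
        self._state = "Lo"

    @property
    def State(self):
        return self._state

    @State.setter
    def State(self, value):
        # The 'safe' part: reject values the hardware can't actually take.
        if value not in self._LEVELS:
            raise ValueError(f"invalid level: {value!r}")
        self._state = value

DIO_PORTS = [DIO() for _ in range(8)]
DIO_PORTS[0].State = "Hi"   # reads like the DIO[0].State = Hi from above
```

No Assert()/Decline() zoo required - you keep the nice assignment style and still get validation.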

And then you go farther! Obviously since you save time when you use objects (another tenet of OOP) you decide to use them everywhere. So you've got relays too, and relays are kinda like DIO right? They go on and off - it's perfect! We'll just extend the DIO type to encompass relays too. So I can Assert() and Decline() relays to close and open them, and I can toggle them and I can Tristate() them... Ummm... wait... Relays don't tristate, do they? But it's a method of the relay object type so maybe they do? I forget...

Ok! Wait! I've got the answer! OOP also has this thing called inheritance where you can define an object type and then create more specific types from it (and from reading the tenets of OOP it seems this is good design practice). So I'll create an object with basic on/off functions and then extend that into DIO which will have tristate functions!

So now we have a parent type that I'll call a, ummm, 'Boolean' object. And Boolean will have an 'Open_Switch' method for the relays and a 'Close_Switch' method... Or, hmmm. It won't make sense to 'Open_Switch' a DIO port, so we'll just have to call it something more general like 'Frob'. Yeah, 'Frob' will set the DIO high or close a relay and 'UnFrob' will do the opposite. And then 'XFrob' will toggle.

This is really coming together!

Of course that was all in my imagination and I made it worse than it had to be (a little, anyhow). But it shows the real problem with OOP, which is OVERusing it. Writing methods for toggling and setting a DIO port - sure, go ahead. Extending a DIO type to relays? No, too far. Don't get sucked in to using things like inheritance just because they exist. A relay is not a specialized DIO port even if they have a couple of the same functions and properties!
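The cure is usually just two small, independent classes instead of one tortured hierarchy. A sketch (all names hypothetical) of what that looks like when you let each thing be itself:

```python
class Relay:
    """A relay just opens and closes; no Tristate() anywhere in sight."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def open(self):
        self.closed = False

class DIOPort:
    """A digital output that really can go high-impedance."""
    def __init__(self):
        self.state = "Lo"

    def set_hi(self):
        self.state = "Hi"

    def set_lo(self):
        self.state = "Lo"

    def tristate(self):
        self.state = "HiZ"
```

Each class has exactly the vocabulary its hardware has - no Frob, no UnFrob, and no wondering whether a relay can tristate.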

PS - It seems Jeff Atwood agrees with me somewhat.
PPS - And again

Now I know he doesn't perfectly champion every single word in my post but I think we agree on general principles here.

Thursday, January 1, 2009

Killer Tip

Well, it's not killer per se. If you've had a bit of experience you probably already do this. I've just started to document what I'm doing with each of my projects so I can start right off the next time I want to work on it (instead of spending a half hour figuring out what the next step is). I just create a file called TODO.txt in the main directory for each of my projects and I update it every time I work on the project saying what I did and what's probably next. Hopefully that will keep me up to date on what I've done.

Now here's the REAL killer tip: automatic timestamp. If you use Notepad++ hit Ctrl+F5 to automatically enter the date/time where the cursor is. And this, THIS is the real killer. It comes from good old Notepad. That's right, Notepad.exe takes the cake for awesome features with this beauty: create a new TXT document and type '.LOG' (no quotes) on the first line. Save it, close it, then reopen it. The date and time is automatically entered right above your cursor. Just start typing.
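If you want the same trick outside any particular editor, it's a few lines of Python. This is just my own sketch - the TODO.txt filename and the timestamp format are my conventions, nothing standard:

```python
from datetime import datetime
from pathlib import Path

def log_entry(note, path="TODO.txt"):
    """Append a timestamped note to a file, like Notepad's .LOG trick."""
    stamp = datetime.now().strftime("%I:%M %p %m/%d/%Y")  # Notepad-ish format
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(f"{stamp}\n{note}\n\n")

log_entry("Next step: route the power section of the board.")
```

Drop that in a scripts folder and every project note lands with a date on it, no matter what editor you happen to be in.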

I can't believe Notepad of all things has such a useful feature. It doesn't even have proper 80 character line wrapping.

(Edit) I've done a little searching and found out that you can also add the timestamp in Notepad by hitting F5. Not nearly as cool though.