Wednesday, December 14, 2011

Labview is Actually Not All That Bad

In stark contrast to the flurry of epithets I hurl at Labview on a daily basis at work, I will now say some good things about it - because it deserves it!

First, the blocks are actually pretty old school. If you get stuck trying to decode a string in Labview, your tools look surprisingly like what you'd expect from the C standard library. Granted, you don't need to worry about NULL termination or anything, but most of the basic building blocks (like string subset, search and replace, etc.) promote code that acts (in form) like old-style C code. It's not like writing the routines yourself (iterating through each character, comparing, discarding, buffering, etc.), but the solution you come up with to manipulate strings will probably feel like an old-style C solution. I also like the way the control structures work - you get nice features like shift registers (which teach you about instantiating a variable before the loop) or straight-through tunnels that just take the last value from the loop and pass it out. You can make complicated while or for loops with conditional terminations or conditional continuances. These are very welcome forms to have available. There are a lot of nice tools and abilities in Labview that don't feel new and gimmicky, but instead old and tried.

Second, while most Labview VIs are just a mess, plain and simple, what I've found is that a messy VI usually means messy, badly-organized code. A clean, well-organized VI means you've created sub-VIs in the right places, organized the right data into clusters, used the right type of loop, etc. If your VI is clean and organized, chances are your code is too. Thus, by seeking to visually clean up your block diagram you can actually write better code. Of course, the reverse doesn't hold: you can have great code that still looks like a mess, because sometimes you just can't get around that in Labview. But chances are, if the block diagram looks good, it's because there's a highly organized coding mind behind it. Labview helps people learn how to organize code by presenting it as a visual problem instead of an esoteric, abstract one - and many people are simply visual learners.

Third, I'm ecstatic as hell that whenever I Google anything related to Labview I come up with an answer - period. Someone has tried to do what I'm doing, or for some insane reason National Instruments made a guide on how to manipulate TestStand sequences programmatically from Labview. Quite simply put, I'll spend hours trying to figure out a problem on my own, and when I finally get the bright idea to Google it, the answer is invariably THERE. Done in five minutes. At this point in my career I'm a little more focused on results than banging my head against a wall to 'learn', so that feels really good. I have no idea why all of this support is out there, but it is and it makes me happy.

Fourth, Labview is free of a lot of the object-oriented crap that plagues many trendy languages today. Yes, Labview is a bit trendy by itself, but (and this impresses me because the more I think about it the more I realize it's true) Labview is actually pretty old-school. There's some old-school, hard-core stuff you can't do (like function pointers - well, only kinda), but I'm pleased that I haven't seen the term 'inheritance' once when dealing with Labview. True, all the things that make me hate C++ might be in there, but I haven't been forced to deal with them and I haven't seen anyone else's work that deals with them either.

Don't get me wrong - I still find plenty to hate about it (and I may get to that later), but the more I consider it, the more Labview feels like C, but graphical. And that's not terrible.

Wednesday, October 5, 2011

Good Coding Practices #...?

I have a semi-ongoing series on good coding practices... to the extent that I ever update my blog anyhow. Lately I've taken up work on an iTunes-to-Android sync tool. Is anyone else amazed that there are very few good programs that will sync iTunes playlists to an Android device? There are some out there, certainly, but for some reason each of them tends to have one or two major flaws: way too slow, randomly renames your songs to the titles of different songs, costs money - the usual complaints. So I looked into it and it turns out that iTunes maintains an XML version of its library information file. But then I discovered that the Android playlists are stored in a highly technical format called ASCII-encoded text files. Let me tell you, it took forever to crack that puppy.

Well, when all you have is a hammer, every problem looks like a nail. When you have Python, every problem looks... easy. So I decided to make a Python program to:

1) Read the XML library file.
2) Figure out what playlists are in there.
3) Figure out what songs are in those playlists.
4) Figure out where those songs are.
5) Make Android playlists from the iTunes playlists.
6) Copy the playlists and music files to the Android device.
7) DANCE!

So at first I tried using my favorite parser - the SGML parser. But it turned out that the SGML parser doesn't handle self-closing tags. You know - the ones with nothing in them? With only a start tag that has a / in it and then it's done? Yeah, those. So I had to switch to expat, which isn't so bad either.

But enough of that! I'm going to show you what I did that's a good coding practice. The iTunes XML file has several parts in it: a general section that describes the library, a tracks section that describes each track and assigns it a unique ID, and a playlists section that describes each playlist and lists the unique track IDs it contains.

I wanted to start off by parsing all the goodness of the general library section and ignore the rest, while at the same time planning ahead so I would... be able to figure out where to put the code to parse the rest of it later. To that end I present a random code snippet:


def handle_data(self, text):
    if self.current_tag == KEY_TAG:
        self.current_key = str(text)
        print "Key: " + text
    elif self.current_tag == INTEGER_TAG or self.current_tag == STRING_TAG or self.current_tag == DATE_TAG or self.current_tag == TRUE_TAG:
        if self.current_parent == LIBRARY_KEY:
            if self.current_key in libraryKeyList:
                print self.current_tag + "=" + text
                self.tempDict[self.current_key] = text
        elif self.current_parent == TRACKS_KEY:
            pass
        elif self.current_parent == PLAYLISTS_KEY:
            pass
        elif self.current_parent == TRACK_KEY:
            pass
        elif self.current_parent == PLAYLIST_KEY:
            pass
        else:
            pass

        self.current_key = ""


Some explanation: this function handles data inside of tags. It handles key tags specially, but handles tags that contain data (integer, string, date, etc.) differently still, depending on which section they reside in. So you can see I've written the code that handles the data in the library section but left out handling data in all the other sections. But this is by design: if I weren't planning ahead I wouldn't have put the if statement that checks what the parent section is in that function. I would just have put in the code that handles data for the library section without verifying that I was still in the library section - and then it would have happily 'handled' a whole lot of data in the rest of the file.

By putting the parent key check in there and explicitly listing the different situations that I want to code for, I'm doing two things. First, I'm specifying the exact situation I expect this code to run in - putting my assumptions right out there in the code. Second, I'm specifying all of the other situations that I haven't yet coded for but want to in the future. I'm using the code to inform myself (in the future) that I need to put code there that does something different. That's the good coding practice.

This technique can be used in a variety of languages. In Python, use the above form, but make sure that you put the pass statement in an empty case - otherwise it gets angry. In C you can use #warning directives to produce a warning when you know you'll have to write some code but just haven't yet. Like '#warning Will Robinson, you didn't handle the default case!'
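Here's a minimal C sketch of the same idea - spell out every case you expect, handle the one you've written, and let #warning (a GCC/Clang extension) nag you about the rest. The section names and keys here are hypothetical stand-ins, not anything from the real parser:

#include <stdio.h>

enum section { SEC_LIBRARY, SEC_TRACKS, SEC_PLAYLISTS };

void handle_data(enum section parent, const char *key, const char *text)
{
    switch (parent) {
    case SEC_LIBRARY:
        /* The only case handled so far */
        printf("%s = %s\n", key, text);
        break;
    case SEC_TRACKS:
#warning TODO: handle data inside the tracks section
        break;
    case SEC_PLAYLISTS:
#warning TODO: handle data inside the playlists section
        break;
    default:
        break;
    }
}

int main(void)
{
    handle_data(SEC_LIBRARY, "Application Version", "10.5");
    return 0;
}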

Sunday, July 3, 2011

On Excellence

I fancy myself pretty good at C. Not great, but pretty good. I can find my way around source code, I can write from scratch, I can debug with the most average of them. I'm handy in a variety of ways and I eat source code for breakfast.

This didn't happen in college. This didn't happen in high school. This didn't happen in freaking grade school. I have been programming since I was 6. I started off with AppleSoft BASIC on an Apple IIc knockoff (Laser 128c was the correct answer for those of you playing the home game). After AppleSoft BASIC it was GW-BASIC on the 8088 and QuickBASIC on the 286 and up. But roundabouts high school I decided I had to learn C - because that was the language that grownups used. So I bought a C book (the right one as it turns out - if you want to learn C get this book first), downloaded DJGPP and got to work!

I'm having a computer weekend (putting a computer whose hard drive failed back into working order) so I'm going through old files looking for utilities and just plain reminiscing. I decided to see what my old C code looked like.

Oh God, it's awful.

It's terrible. Here's an example (with some helpful comments from future Steve):
i=-1;

fseek (readfile,0,SEEK_END); //Set starting point to end
size = ftell(readfile); //Find file size
fseek (readfile,0,SEEK_SET); //Set starting point to start

//FS - Seriously? Is there no better way of getting the size of a file?

//FS - Oh god, who gave me malloc?

readfiledata = malloc(size); //Allocate memory for char

printf("Filename is %s\n", argv[1]);
printf("Size is %d\n", size);
printf("Copied %s to steve.tmp\n", argv[1]);

//FS - WHAT?! WHAT?! Index of -1?

i=-1;

do
{
    //FS - SERIOUSLY!? Is there no easier way to get all the data in an array? Did you not look?

    readfiledata[i] = fgetc(readfile); //FS - OH GOD YOU ACCESSED INDEX -1 OF AN ARRAY!
    i++;
}
while(!feof(readfile));


In case you were wondering, I used malloc() and, no, there is no corresponding free() call. I relied on the fact that the OS would free the memory once the program exited. There were variables defined in .h files (no, not extern declarations - plain old definitions). What should have been arrays of constant strings were 'initialized' using sprintf (copying a constant into a string) rather than just being initialized when the array was defined (as any normal person would do).

And the best part is that the whole program I made basically amounted to a regular-expression find-and-replace. All I needed to do was remove images and other formatting from HTML files so they'd be easier to print off and use less ink. That could have been done a lot more easily.

This was 1999. So, becoming average takes at least 12 years of constant use of a skill - and I still screw up. Looking at this I can see a lot of myself in new grads coming out of college - the same mistakes, the same assumptions, the same basic design assumptions that end up making bad code (even if it runs). It falls in line with what Malcolm Gladwell writes in Outliers: if you do something for 10,000 hours you'll be great at it. It's not necessarily innate skill, it's practice, practice, practice. It's why I'm a professional programmer and not a professional trombone player - I just code a whole lot more than I play trombone.

So knowing this I can see where new grads come from - hey, they haven't had 10,000 hours of programming; they probably haven't had 10,000 hours of anything engineering-related from their college experience. 10,000 hours is about 3.5 years at 8 hours a day, every day, and that's just for one skill. Engineering is a whole plethora of skills, and if you don't know where your career is taking you, why bother practicing one skill over another?

That's my strawman argument - I don't agree with it. My question is: if you know you won't have your 10,000 hours in whatever you want to be good at by the time you graduate college (and by extension, be able to show some really awesome work to a prospective employer), why didn't you start earlier? Did you plan to be average? To be right in the middle of the pack, to not stand out? To be, essentially, replaceable by any other member of your graduating class? Did you plan to go out into the job market and have a big corporation tell you what you should be good at instead of deciding it yourself? Didn't you get into engineering for a reason?

I see people on both sides of this question and you can tell them apart right away in an interview. The people who don't know why they're engineers - the ones who didn't start early excelling at something they loved and wanted to do - come to a job interview and basically want you to tell them what kind of career they should have. They hit the middle of the road for all of their classes - probably picked whatever electives were the most popular because they didn't really care about the difference, didn't have an opinion on what they wanted, didn't find anything particularly exciting and just followed their friends into a class. Their senior projects were whatever they were assigned and they just sort of did them. They don't speak about them with passion; they just wanted to graduate and they needed a project, so they did it. And they're not dumb - a lot of them have 4.0 GPAs for what it's worth. But since they didn't know what they wanted to do they never got in-depth on anything. They never really put together the pieces that every single topic in electrical engineering is inextricably linked to every other. Analog circuits mean differential equations, considerations of bandwidth, frequency response, frequency content of waveforms, Fourier series, linear algebra, matrix equations and any number of other fields of study. The coursework isn't a checklist, it's a symphony of learning. But if you don't have passion it is just a checklist. Mom and dad want you to be an engineer. You're smart, so you do well in classes and you graduate with a high GPA. You go for a job at a big corporation and they grind you into whatever kind of employee/engineer they want. Yay for you - you're average.

But the ones who have passion and drive and love what they've done - they stand out immediately as well. They had a definite plan when they went to college. They'll tell you how they took apart TVs when they were a kid (good Lord it's dangerous - let your kids do it but make sure those capacitors are discharged) or how they wrote dumb little computer games in Visual Basic to entertain themselves. They just won't shut up about their senior project or whatever personal projects they have if they haven't started their senior project. Their eyes light up when you suggest ways they could improve their project ('Ooooh my God... I wish we could go back and work on that more - I'd make it so much better!') or they kept working on it themselves after they graduated. They've learned weird programming languages for fun. In essence, they love what they do and they just won't stop doing it. They don't ask you for direction, they tell you - I'm this kind of engineer and I love doing it, do you need me? And the answer is usually yes, we need you.

So essentially the choice is yours: You can be average or excellent. There is certainly a long road between the two, but you have the choice to take the journey and practice practice practice. And if you find out that whatever you had your eye on doesn't really interest you then fine - move your target, pick something else. Excel at something. Hell, maybe you'll have 10,000 hours of random junk you've practiced. That's okay - it makes you an excellent generalist. Don't just sit around and play video games - ply your craft. I guarantee there's a payoff even though it's a long way down the road. Yes, a very long way. But it's worth it.

Monday, June 20, 2011

Father's Day

I just read a nice article about someone whose father taught him how to build, use tools, make things. Reading that, it occurs to me that I've never heard a 'My father taught me how to code and I'm grateful for it' story. It seems like everyone who codes well has been mostly self-taught - a loner. I wouldn't have wanted it then, but looking back now I would have liked to have been taught something so wonderful by my father. I hope we have these stories someday.

That is all.

Wednesday, May 25, 2011

Documentation

People say they hate writing documentation, but what they really hate is Word. And even Word would be okay if no one cared about formatting. Once you have to conform to these corporate styles things get so awkward - oh, you used 11 point font instead of 10, your margins are .05" off, you can't use a table here because it doesn't justify correctly. I've been in peer reviews where the only comments people have are formatting (and spelling errors). It's such an anti-pattern.

Wouldn't it be much better if documentation were like wikis? Where anyone can find the document they want to edit? Where all you need is a web browser to edit it and it's just text? Sure, you have to use *'s instead of bullets maybe, or -'s or something, but have you looked lately at how many different options you have for bullets in Word? It's insane. I'd rather have one ugly bullet.

So sure wikis are simplistic, but they're straightforward and you get to focus on writing instead of margins. But they don't work for real engineering, right? Real engineering documents are version-controlled, have complicated title pages, fancy diagrams and backgrounds that say things like 'UNCONTROLLED'. Wikis couldn't ever.... or could they?

Step one: version control. Github now has wikis. But Github doesn't just do wikis - anyone can do wikis. Github does version-controlled wikis. Wikis are written in text-based markup: typically Markdown, MediaWiki, etc. But they're all text - just text. Github saves each page you create in the wiki as a text file in a repository separate from the project you're working on. The only non-ideal thing about this whole setup is that all of the wiki pages are stored in one directory - no structure at all. So if you want to create a block diagram for a sub-assembly in a sub-directory you'll have to figure out how to store that information somewhere. I'm considering storing the directory information in the name somehow, but this may be a bit unwieldy.

So you'll end up with a bunch of text files with odd markup stored in a repository separate from your project. Surely there must be a way to take these text files, written with special markup, and turn them into something (dare I say) pretty? Well of course there is - Github takes the text files and creates web pages, doesn't it? So yes, it can be done and it will be done. There's an open source program called Pandoc that describes itself as a Swiss Army knife for transforming markup formats. If you look at the list of formats it can convert between, it's a long one. Very neat, very useful. Now instead of text files you can get PDFs or... DocBook.

The PDFs you get look like nice, printable versions of web pages. Basic but serviceable. But engineering documents from real engineering companies don't just look serviceable - they look complicated. They're full of revision history blocks, referenced documents, government standards and the aforementioned 'UNCONTROLLED' backdrops. You can still do this with this approach, but you need a lot more finesse. Enter DocBook. DocBook is used to create... books. You know all of those programming books with different animals on the front? Like this one? If my history is correct, they're all written in DocBook, and in fact O'Reilly invented DocBook so they could write their books more easily. That's why they all pretty much look the same. That and I guess those folks are boring.

The great thing about DocBook is that it's customizable. The input files are just XML, but the output is usually PDF - just print it off, bind it, draw a fish on the front and you've got a book. Or, if you want an engineering document, you describe some table layouts for revision history, title page, etc., fill out that information in your XML file, transform it and then you've got an engineering document. True, that will be a LOT of work, but so is trying to use Word to do the same thing. Best of all, Git is version control, so your revision history is built in: you can parse Git commit logs to fill out the revision history section. If your referenced documents are in version control (which would be a good idea) then you can link right to them. And DocBook has all sorts of other neat features built in: automatic table of contents creation, automatic figure referencing with hotlinks, you name it. It's worth looking into.

Text is great, yes, but we all scream for graphics. The Github wiki can reference files from your project repository, so including graphics in the online wiki is not really a problem, but what about in locally-produced PDFs? This might get hairy. Pandoc has a different format for specifying image links than the Github wiki has. Luckily Pandoc is an open-source project, so you can modify it to your heart's content if you so like. I might just figure out something else. So the workflow looks like this:
  1. Draw your tables, graphics, etc in whatever program you use locally.
  2. Use command-line tools (as part of a makefile) to export the local graphics to a GIF or JPG format so they can be included in your documentation.
  3. Save the newly-exported graphics in a common area of your project repository.
  4. Commit your changes to Github.
  5. Write your documentation in a Github wiki and reference the graphics you just committed. This will produce easily-accessible online documentation.
  6. Retrieve the wiki changes from Github to your local wiki repository.
  7. Modify the local copies of the wikis to allow Pandoc to run on them seamlessly.
  8. Run Pandoc on the wiki text files to create either PDF output or DocBook output and copy it to the correct place in your project repository directory structure.
  9. If you just want PDFs, you're done. If you created DocBook output then there will be another step to distill the DocBook to PDF after running it through all of your custom stylesheets.
  10. Commit your changes and you're done.
Tada! You have professional-looking PDF documentation derived from a wiki and various graphics. And what's great is that most of these steps are automated once you set up the makefiles. The only non-automated steps are actually writing the documentation, making the graphics and creating the stylesheets. Aren't you happy?

Wednesday, May 18, 2011

Tool Vs. Patterns

It's hiring time where I work - mostly new grads. That means lots of confusion. The disparity between what new grads expect and what actually happens in industry is sometimes wide.

For instance, my company might say 'We're looking for people with knowledge of VHDL to program FPGAs'. Plenty of students take a VHDL class or two - so you'd think that'd be a good fit, yes?

Yes... and no. The tool is not the end product. Whatever is made will be made with VHDL, true, but the end product will not materially depend on VHDL being the tool used to create it. In fact you could use Verilog to achieve the same thing, or even schematic capture. Or step back even further: digital logic is fundamentally NAND gates or NOR gates. You can make any digital device from NAND gates - everything from a half adder to an iPod. But you won't get a job anywhere just because you memorized the pinout of an SN7400 IC - the important part is what you're creating more so than how. Fundamentally, being familiar with the common patterns used in your field of choice is the mark of a good developer, more so than strict adherence to syntax.

I'll illustrate with an anti-example that's more in my line. Let's talk C code. 'But it compiles!' has long been the defensive cry raised by many a young coder to defend their petty messes. Their unspoken assumption is that adherence to syntax is the sole qualification of a good C coder. This is patently untrue, and a prime example is the anti-pattern of placing all of your code into main(). We've all done it - in fact it's the first thing you learn in class, because they haven't taught you function calls yet. But is it a suggested practice? Not in the slightest. It produces code that's difficult to read, difficult to control (requiring the use of many gotos - warning: considered harmful) and just in general a pain. Of course the proper way to write code is to use function calls to compartmentalize functionality and efficiently reuse it, as in the toy sketch below. The only people who deny this are defensive greenhorns more concerned about compiler errors than writing maintainable, readable and efficient code. Knowing syntax is a necessary condition for being a good programmer/engineer/developer, but certainly not a sufficient condition.
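A toy illustration of the difference (not from any real project, just the shape of it): the reusable work lives in functions and main() only orchestrates.

#include <stdio.h>

static double fahrenheit_to_celsius(double f)
{
    return (f - 32.0) * 5.0 / 9.0;
}

static void print_conversion_table(double start, double end, double step)
{
    for (double f = start; f <= end; f += step)
        printf("%6.1f F = %6.1f C\n", f, fahrenheit_to_celsius(f));
}

int main(void)
{
    /* main() just calls the pieces; the pieces can be reused and tested on their own. */
    print_conversion_table(0.0, 100.0, 10.0);
    return 0;
}

Dumping all of that straight into main() would compile just the same - it would just be the unreadable, unreusable version.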

What about a straight example of why you should pay more attention to patterns than syntax? Something that looks awful but is brilliant, effective and clean. I present to you Duff's Device:

send(to, from, count)
register short *to, *from;
register count;
{
    register n=(count+7)/8;
    switch(count%8){
    case 0: do{ *to = *from++;
    case 7:     *to = *from++;
    case 6:     *to = *from++;
    case 5:     *to = *from++;
    case 4:     *to = *from++;
    case 3:     *to = *from++;
    case 2:     *to = *from++;
    case 1:     *to = *from++;
            }while(--n>0);
    }
}

(Code formatting is always iffy on this blog so you'll have to forgive me.)

What manner of insanity is this? It's truly mind-breaking when you first look at it. So much so that when I show it to new grads/students the first words out of their mouths are more often than not 'Well that won't compile'.

And at this point I get an evil, evil smile on my face. They've shown their hand - more concerned with syntax than patterns. I assure them that it does, and the response follows: 'But it's not right!' Oh, but it compiles! So it must be right, no? Oh, the joy I have at this point - I've defeated a mere child in a battle of wits (okay, I'm not a great person). None of them attempt to figure out what's going on - which is sad. Because once you understand the purpose and function of this code you recognize at least two patterns right off:
*to = *from++


This is your basic memory copy with a twist - the destination address isn't incremented. If you're really smart and had a focus on embedded systems you might realize that's because it's meant for a memory-mapped I/O register so the address won't change between writes.

switch(count%8){
case 0: do{ *to = *from++;
case 7:     *to = *from++;
case 6:     *to = *from++;
case 5:     *to = *from++;
case 4:     *to = *from++;
case 3:     *to = *from++;
case 2:     *to = *from++;
case 1:     *to = *from++;


You might recognize this as loop unrolling. Loop unrolling is used to minimize the overhead incurred from jumping around in a loop: instead of doing one thing n times, you do (say) four things n/4 times - Duff does eight things n/8 times. Useful.
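For contrast, here's what plain, garden-variety loop unrolling looks like in C when you don't have to worry about leftover iterations (a sketch - it assumes count is a multiple of four, which is exactly the annoyance Duff's switch interleaving gets rid of):

/* Copy 'count' shorts to a fixed memory-mapped register, four at a time.
 * Assumes count is a multiple of 4. */
void send_unrolled(volatile short *to, const short *from, int count)
{
    int n = count / 4;
    while (n-- > 0) {
        *to = *from++;
        *to = *from++;
        *to = *from++;
        *to = *from++;
    }
}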

Go ahead and read the full explanation of Duff's Device. If you can follow, you'll see why it's an amazing piece of code and why it vindicates the idea of patterns. Tom Duff understood assembly programming and knew how to implement loop unrolling in it - he was familiar with the pattern. He wanted to do it in C but couldn't think of a straightforward way so he abused the hell out of C syntax to make it work. It makes you want to cry for joy and in pain because it looks like that poor, efficient program is being tortured to death.

This is why patterns are more important than syntax. Patterns are like tools: if you're working too hard, chances are there's a tool to help you be lazy. And new engineers usually work way too hard - most often at reinventing the wheel. They'll figure out some complicated method of ensuring single access to a variable because no one taught them about semaphores. Oops. Patterns help us write efficient and correct code because they are distilled knowledge - the lessons of the programmers before us given form. It pays great dividends to know about them.

That's what 'x years of y experience' on a resume means: I know a lot of patterns, I have a lot of tools in my toolbox, I won't make stupid mistakes because I'm past that. If knowing syntax is intelligence then knowing patterns is wisdom (go D&D!). If you don't have both you will never be as awesome a programmer as Tom Duff.

Monday, May 2, 2011

Offering Advice

I spend an inordinate amount of my free time on a site called Chiphacker. It's a Stack Overflow for electronics, embedded software and general EE nerdiness. Sometimes you get wonderful questions like this one:

Hey guys, I'm trying to properly bias the LM34476U chip (datasheet here) for operation in the saturated MODE. However, I want to be able to dynamically change the bias to on mode when my signal source changes. I've sketched out a design using the MAX4640AB 4:1 multiplexer that will switch out bias resistors but I'm getting a split second where both resistors are conducting and the feedback causes ringing in my output that I can't get rid of. Can anyone recommend methods to reduce the bandwidth of the LM3447 to eliminate the ringing? I've checked the app notes but the method they suggest there doesn't apply directly to my circuit topology and I don't know where to connect the feedback capacitor with my setup.


Wow, just wow. It's questions like that that I open up my circuits text book for.
Literally - check out this question here. I was even nice!

But sometimes there are questions asked that... spawn blog posts. That spawn stand up comedy routines ('Hey guys did you hear about the guy that asked a stupid question on the internet?'). Questions asked by people with obviously little knowledge of engineering, analog design, programming, digital logic, etc. Out of respect for the guilty I will not post one of the questions here but instead show you a creation of my own twisted mind that makes me approximately as angry as the real thing:
Hey guys, I've just had a great idea that I'd like to follow up on but I don't know where to start so I was hoping you could give me some pointers.
It seems to me that a lot of disabled people have problems with picking things up, so I want to build a robot that will roll around and pick up the things that they need picked up and you can control it with voice commands.
I've got a lot of software experience with VB6 and I've been reading up on the PIC so I think I can write the code for it when I find an implementation of the .NET framework for the PIC.
What I don't know about is the motors and stuff and the voice control maybe. I need the robot to roll around with wheels and its hand has to be just like a persons hand so it can pick everything up.
It also needs to know what the words for things are and it should probably understand if you point at something too.

My cousin is disabled and he has such a hard time... I really want to help him and think this is a good idea. Do any of you guys know a website or book that can help me figure this stuff out?

Thanks guys, I know together we can work this out!


This just makes me angry. Doesn't it make you angry? The first thing that pops up in my head is that this isn't really a question. If you had to distill it it wouldn't end up as 'Is this possible?' or 'What will be involved in doing this?' but instead 'Tell me how to do this'. It's like saying 'I want to get into Heaven - tell me how to do this.' Man, you'd be in for quite a discussion let me tell you....

But what are the specific ways I hate this question? Oh, let me count them:

Hey guys, I've just had a great idea that I'd like to follow up on but I don't know where to start so I was hoping you could give me some pointers.


First off, I hate it when you use any honorifics ('Dear sirs, I am many problems with this software having! For much to help please!') or really, any form of address. Don't call me 'guys', don't pretend like you know me unless you post regularly. Yes, I do check. Secondly, the phrases 'I don't know where to start' and 'give me some pointers' indicate this person has no idea how to do what he wants to do. At all. Not even close. The last time you had a great idea, didn't you at least have some clue how it might be done? Do you think 'I want to know when I should stop pulling my car into the garage' and then immediately assume that this is a question you shouldn't even consider - that you should go consult a professional? Even if you start thinking about it and conclude you don't have enough specific knowledge to formulate a final solution, that at least leaves you with some specific questions to ask. If for some reason you decide it needs to be electronic, you'll quickly reach the question 'How can I measure distance electronically?' Then you ask THAT question to the professionals. Saying 'I don't know where to start' means you didn't bother to think through it even a little or you honestly don't understand what you want.

It seems to me that a lot of disabled people have problems with picking things up, so I want to build a robot that will roll around and pick up the things that they need picked up and you can control it with voice commands.


This boggles my mind. Do I want to help disabled people? Yes. Do I understand some of the issues that may be affecting them? Yes - apparently they 'have problems with picking things up'. Could I design an apparatus that might help disabled people pick things up - via voice commands? Perhaps - but it would take years. You see, I'm a simple embedded software engineer (before this I was a controls engineer - before that an electrical engineer and before that? Farmer) and the request you just made involves at least five different fields of study (mechanical engineering, embedded software development, computer science with a specialty in artificial intelligence, digital signal processing, analog design - all off the top of my head) and would require someone with a college education in each. I'd probably need one PhD as well - just for good measure. It's not as if I could do wonderful things to help the disabled but I just get drunk instead. This is HARD.

I've got a lot of software experience with VB6 and I've been reading up on the PIC so I think I can write the code for it when I find an implementation of the .NET framework for the PIC.


Okay... this means you don't understand any of the potential tools you might be working with. Do you guys know what the PIC is? An 8-bit microcontroller that can't find its ass with both hands. Actually, scratch that - the PIC only implements one hand in hardware and requires you to emulate the other hand in software if you want to access it. This thing is dumb as sin. It most certainly doesn't have an implementation of .NET, and even if it did, the first thing he'd probably try to do is make 'Hello World' using Console.WriteLine or whatever prissy function they use because they're not man enough to call it printf. You have to work with bits when you use this thing. Software people don't even know what bits are anymore - this guy is way out of his league.


What I don't know about is the motors and stuff and the voice control maybe. I need the robot to roll around with wheels and its hand has to be just like a persons hand so it can pick everything up.
It also needs to know what the words for things are and it should probably understand if you point at something too.


Oh okay, so what you'll admit you don't know about is maybe 3/4 of the entire thing? So at the very least it's only ten years of school? Do people seriously think that these matters are trivial? Many legitimate college graduates are simply lost as far as practical skills go in their first job, and even then a mechanical engineer (for instance) isn't a machinist - he may be able to design the mechanical parts but he can't make them. You need someone who works with his hands and hopefully still has all of his fingers - that means he's good! And I'm not even going to get into how difficult it is to 'make it understand words'. Suffice it to say the most advanced processing hardware in the world takes years to learn a language and usually doesn't learn it correctly anyhow.



My cousin is disabled and he has such a hard time... I really want to help him and think this is a good idea. Do any of you guys know a website or book that can help me figure this stuff out?

Thanks guys, I know together we can work this out!


Oh of course - a book! Or a website! All he needs is a website - one that tells him EXACTLY WHAT TO DO. Every step from A to Z to make this magical device that barely exists in his mind because he hasn't thought it through very much. Yes, I wrote that webpage - it's on Geocities. I dare you to find it. Face it - even if such a thing exists, someone's selling it and they're not going to tell you how to make it. They're not going to document it completely and put it on the web for you to reproduce so you can avoid paying them anything. People like their effort to be rewarded - and this is effort.

People will try to defend this person. 'But Angry', they'll start, 'he just wants to know if it's possible!' Screw that - everything is possible. Period. It all depends on how much you want to pay. And I can't tell you how much it will cost. Do you realize that real companies have people who are paid - full time, real money - to estimate how much jobs will cost? And (call me cynical) when you tell him that the price for development of this miracle is in the millions he'll say 'I don't understand why it needs to cost so much!!! You can't be for real!' You see, he has already demonstrated that he lacks the ability and knowledge necessary to judge the difficulty of the ideas he presents. Remember that as far as he's concerned he simply hasn't found the right website to tell him how to do it. After that it's easy.

'He just wants to know what it will take!'

Seriously? Remember how he can't tell whether he's asking for something reasonable or something insane? When I tell him it's something insane guess what? That's not the answer he's looking for. He has every reason to ignore me and perhaps ridicule me. I can't say for sure whether he has a rigid mindset that won't listen to reason but I am saying for certain he has no facility to judge whether something is reasonable or not. So why even ask? Why not ask 'Am I way off base here?' instead of 'Show me how to do this'.

'You should at least be nice to him'

Well... I could. It's true. I don't have to do anything - so many questions simply remain unanswered, bereft and eventually die in obscurity. It feels so fulfilling to see no answers, no comments, no nothing on a question months after it's been asked. But I have my limits. I am the Angry EE for a reason. And that reason is that some things I just can't let go. On the internet anyhow - where all I have to do is type. Otherwise? Waaaay too lazy. But I will not refrain from essentially answering his question in the only way one paragraph can. I am not on retainer for more than one paragraph - no one on stackexchange sites should be. It will be pithy, it will be a little acerbic, but it will be right. If you choose not to believe it, I will follow up with comments that make fun of you more and more.

The moral of the story is to do some work for yourself. Some, any! Then you can ask more specific questions - questions which I will be less hesitant to answer than blanket questions about topics you obviously don't understand and haven't thought about. I answer lots of questions on this site and there are plenty of good ones, which means that plenty of people don't expect me to do their homework for them. I like those people. Yeah, don't expect me to do your homework. Homey don't play dat.

(Does anyone get that reference?)

Saturday, April 23, 2011

THE FUTURE

For me, this weekend I realized that I was living in the future.

I've always been bullish on the future. I read a lot of science fiction, and science fiction loves the future. Time travelers from our time go into the future for some reason and they see all of this cool technology: warp drives, transporters, ray guns, nanoconstructors - you name it. Science fiction writers love science, but most of the time they skimp on the social aspects of the future. Do we have racism? Is there free love? Are drugs legal? Is religion abolished?

Sci-fi writers are often less eloquent on these matters. Roddenberry wasn't. He showed us a world in stark contrast to the one that was developing around him. Racism: gone. Freedom: for everyone in every way. No one is sick because we all decided that wasn't going to happen in our world. No one is hungry because not a single person on Earth could bear to see a person starve. There's no war because we're way too grown up for that. There's no money because we're all so rich we don't feel the need to measure it anymore.

But the uniforms he put his characters in were absurd.

Seriously, go-go boots? Miniskirts? Captain Kirk is pretty much wearing pajamas. I mean take a look at military uniforms in our time. They've changed only somewhat in the past 300 years. They're full of creases, corners, polish, buckles and complication. Uniform maintenance is the primary method of making grown men cry in the military. I don't foresee a general relaxing of military culture to the point that you can chill in pajamas on the bridge of the flagship within Star Trek's timeline.

Specific predictions about the future can get tricky. Very few people predicted even really obvious stuff like computers and cell phones. And clothes are worse. Fashion is unrecognizable year to year. The best rule I've figured out is the 20-year rule: whatever you're wearing now will be stylish again in 20 years (assuming it is currently stylish). It works well enough in a general sense, but I'm also quickly losing sense of what is fashionable right now anyhow.

Which is why I was blindsided this weekend. We went to the beach (note to everyone in the frozen north: it is simply gorgeous out here and the water is plenty warm). I saw a bunch of surfers - many of them much much younger than I. And they were ripping up the waves and...

Wearing war paint.

Yeah, streaks of red, blue and yellow all over their faces, arms, chests, you name it. They looked like they belonged in Braveheart. I don't normally attempt to fathom the culture of kids these days but this one left me curious. The only explanation I could think of turned out to be the right one: it's sunblock.

And of course it's sunblock. Let's do the math: you need to wear it if you're going to be out in this sun and it pays to be generous with it. When I was baling hay in summers on the farm I used pure zinc oxide - creamy, greasy and great at picking up dust and dirt. But you had to use it - the normal stuff wouldn't get the job done, and even with the zinc oxide you had to slather so much on that it was opaque. I looked like one of those nerds you see in movies who never goes outside - pure white nose peeping out from behind a wide-brimmed hat and long sleeves in the middle of summer.

Oh if only I could have looked cool AND gotten sun protection....

Oh, and now in THE FUTURE you can. You can cover yourself in sunblock and look like a warrior instead of a nerd. This is the sort of great idea that I never had or even considered, but seems so foreign when you suggest it:

"In the future," the traveler started, "the kids wear war paint!" The pitch of his voice raised on the last half of his sentence as if he had made a particularly incisive statement. "They go to their sports games all painted up with their team colors, fancy designs on their faces to intimidate the other team - who of course have their own paint. Some wear it for every day pursuits too - go to the store and you might see some of them there."


How odd! How interesting! How different! THE FUTURE! Of course, in this case the future isn't foreign or odd. If you start from certain suppositions (you need sun block, sometimes you need it really thick, but it looks dumb) then it's not a far leap to the conclusion (non-dumb-looking sun block should be invented).

Yes, THE FUTURE has arrived for me and it's a good thing: someone had an idea that seemed foreign and senseless to me but is in actuality insanely great (and it's making the world a better place). Kids are wearing more sunblock, protecting their skin and stopping cancer - but they don't look like idiots. Yes, in THE FUTURE the right thing to do is also the cool thing. I want more future please.

Wednesday, March 16, 2011

Overflow Overflow Overflow

For the second time in the past few weeks it turns out I'm my own worst enemy. I work for days to track down a bug in my latest embedded application only to find out that (surprise) the jerk who put the bug there is none other than me.

Some background: the ARM Cortex M3 has this nice interrupt vector called FAULT. When something bad happens (divide by zero, memory protection error, etc.) the processor packs up the current program state and vectors directly to the fault interrupt handler. This is a very good thing - you can put a fault handler in that interrupt, resolve the problem and continue on your merry way. Perhaps. The thing about faults is that they tend to be somewhat hard to recover from. For instance, let's look at the two most recent faults I've had to deal with.

The first fault was a stack overflow. I know that some of you have probably heard of a stack, and some of you have heard of a stack overflow, but if you're anything like me you had no idea what this actually meant and simply assumed it would never be a problem for you. Well, it most likely will be at some point. It's like the fraternity hazing of the embedded world: you can't consider yourself an embedded programmer until you've had your first stack overflow.

Let's have some background: a stack is a Last-In-First-Out memory structure that is essentially scratch space for your program. Whenever you call a function, the return address is pushed on the stack, as well as the function arguments. Inside the function the arguments are popped off and used, and when the function is done the return address is popped off and jumped to. You can use the stack directly, but it's not typical to do so if you're programming in a high-level language like C - just let the compiler do the dirty work. Now, the stack is stored in memory just like everything else (you'll see in a second why this is at the root of the problem) and it has a specific length. So it may start at memory address 0x200001E0, and if it's 16 bytes long then something else starts at location 0x200001F0. So you've only got 16 bytes to play with for all of your function arguments and return addresses and such. In a VERY simple system maybe this would be enough, but it's very doubtful. For instance, if you call a function within a function you have to push a lot more stuff onto the stack. If you call a function within that, it gets worse.

Now here's the tricky part - even though the stack is special and unique and essential to calling functions and making your program work correctly, your processor doesn't care. It can't really tell what memory is stack and what isn't. So if your stack is 16 bytes long and you push 17 bytes onto it, your processor will happily obliterate whatever data was residing just after your stack and replace it with what you just pushed. This is stack overflow.

What happened to me the first time is that I didn't allocate enough stack space and it wiped out some important information residing just beyond the stack. Now, if what is just beyond the stack is plain data, then overwriting it is bad, but not fatal for the program. After all, data is data. If I'm adding two numbers and one of them is overwritten by the stack, then I can still add them - the result is just nonsense. What's worse is when pointers get overwritten. If you don't know pointers... then you probably have lots of company nowadays. But it's simple: a pointer holds a memory address - you interpret the value in a pointer as pointing to a memory location. So loading data from a pointer is a two-step process: read the memory address stored in the pointer, then look at that memory address and grab the data there. However, if your stack has overwritten this pointer then it might point to the wrong memory address or (more likely) not point to memory at all. You see, RAM on the Cortex M3 (the one I'm working with) starts at 0x20000000. So if a pointer tells you to look at 0x1FFFFFFF, then it's not just telling you to look at the wrong memory address - it's telling you to look at something that's not even memory. If it tries to do that it triggers a hard fault and you get to figure out why. If it's your first stack overflow then this process lasts days. Enjoy!
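Here's a contrived sketch of how the overflow itself happens (the sizes and names are made up - the point is just that every call eats stack for its locals and return address, and a small fixed-size stack runs out fast):

/* Each call to this function needs roughly 64 bytes of stack for 'scratch'
 * plus room for the return address and saved registers. With a tiny stack
 * (say, a few hundred bytes), a handful of nested calls walks right past
 * the end of the stack and over whatever data or pointers live next. */
int parse_field(const char *input, int depth)
{
    char scratch[64];
    scratch[0] = input[0];
    if (depth > 0)
        return parse_field(input, depth - 1) + scratch[0];
    return scratch[0];
}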

So that's one type of overflow that can hurt you. The second is buffer overflow. There's no magic here either - a buffer is basically an array. Arrays are bounded - they have a definite size. But even if you declare your array to have a size of 512 your compiler won't stop you from requesting the 513th element - it picks out the memory just beyond the end of your array and reads it. And you can write to it just as easily. And of course, there's usually something important there that you really shouldn't overwrite.
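A sketch of that, with hypothetical names - the compiler builds it without complaint, and index 512 of a 512-element array is one element past the end:

short buffer[512];
short *important_pointer;   /* imagine the linker placed this just past the buffer */

void overflow_example(void)
{
    buffer[511] = 0x1234;    /* fine: last valid element */
    buffer[512] = 0x1234;    /* out of bounds: clobbers whatever lives next */
}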

Arrays can be difficult to work with, so people create circular buffers. Normal arrays would be linear buffers: start at zero and go to the end. Circular buffers wrap around so that if you try to go outside the array you loop back around to the beginning instead of trailing off the end. That is, they loop back around to the beginning if you code them correctly. Tell me what you think the result of this code is:

head_pointer = (head_pointer + increment_value % buffer_size)

If you said the modulus (this thing: % ) operation would be evaluated first - you're right! And you're smarter than me! That code doesn't do what I wanted it to do: increment the value then wrap it around if it was greater than the buffer size. Instead it wrapped increment_value around if it was bigger than buffer_size and then added it to head_pointer. The net effect of this is that the pointer never loops around to the beginning of the buffer - it just keeps growing and growing and growing and gobbling up memory as you write to it. If you let it go on long enough it will overwrite a pointer with gibberish and trigger a hard fault.
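For the record, here's the corrected line - the parentheses force the addition to happen before the wrap (same names as the snippet above):

/* Add first, THEN wrap. Without the parentheses, % binds tighter than +,
 * so only increment_value gets wrapped and the index grows forever. */
head_pointer = (head_pointer + increment_value) % buffer_size;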

Yes, I did this. At least this sort of stupidity on my part gains me valuable experience. Lesson learned: I am very foolish sometimes.

Thursday, February 17, 2011

C Gotchas

Holy crap, two updates in one day? Yeah... I decided that if I find myself sitting at my computer thinking 'I wonder what's on Slashdot' or 'Do I have any more street cred over at Chiphacker?' I should do something potentially useful and update my blog. Doesn't have to be long, doesn't have to be good, dosen't ahve to ahve correct spellign - just do it. After all, the first step to making money with a blog is to update it every day. Second step? Have people actually read it. Step three: profit, baby!

But I'm sure you're all here for the real meat of this post and that is going to be the answer to the question 'What stupid thing did Steve do today that cost him hours of time?' It didn't take me hours, but here's a snippet of code that caused me trouble. By the way - you win $10,000 if you spot the bug and submit it before I hit 'Publish' for this post.

#define VALUE 0xF01FUL

short i = 0xF01F;

if( VALUE == i)
{
    printf("They are equal\r\n");
}
else
{
    printf("They are not equal fool\r\n");
}


Ok hotshot, what prints?

If you said 'They are equal' you are in fact wrong. I hope that feels good. But do you know why you are wrong? Here comes the science.

We have two things being compared here: i is a short int. On most processors/architectures that is a 16-bit signed integer. The #defined value is in hex (obviously) and supposedly the same value as the variable, but it has a little 'UL' on the end of it. That signifies that it is to be treated as an unsigned long constant - a 32-bit unsigned integer on the processors/architectures I work with.

That might already give you the first inkling of why these two aren't equal: one is 32-bit but the other is 16. But you veterans out there (if you consider having taken an introductory C class in college as being a veteran) will think 'Ah, but those bottom 16 bits are the same, so it shouldn't matter!' You would be right, but C doesn't follow your rules. In C, integer comparisons are done at least at int width - 32 bits here. Basically, C expands (promotes) every integer used in a comparison to at least 32 bits to determine whether they're the same.

'Aha!' you say with a sly smile, 'I was right then! Even if you expand them both to 32 bits they're padded with 0's in the upper unused portions (obviously!) so they come out to the same thing!'

But once again you're incorrect in your assumptions. Why oh why do you assume that they're padded with 0's? Because the only other option is to pad them with one's and that would change the value? You obviously forgot how signed data is represented on a computer! In signed integer types in C the most significant bit is the sign bit - if it's 1 then it's negative. Seems simple enough. So let's follow your line of thinking and expand our (signed) variable to 32 bits:

16-bit value: 0xF01F
Expanded to 32 bits: 0x0000F01F

Wait a darn tootin' second! This was a negative number (the most significant hex digit was F, which means all 1's, which means the most significant bit was 1 - which means negative). Now that we've expanded it, it's suddenly.... not negative. Well that can't be. It's not the same value then - positive vs. negative. Kind of a big change. So to preserve the value we'd have to expand it and pad with 1's - like this:

16 bit value: 0xF01F
32 bit value: 0xFFFFF01F

Let's check my math with a signed integer calculator that you can find online (via the Google: http://planetcalc.com/747/):

0xF01F: -4065
0xFFFFF01F: -4065
0x0000F01F: 61471

Yep... padding with 0's doesn't produce the same result if the integer is defined as signed. So where you see

if(0xF01F == 0xF01F)


C sees:

if(0x0000F01F == 0xFFFFF01F)


And then it looks at you funny for thinking they're the same.

But I don't look at you funny. I'm only so dismissive and rude because I just made this mistake today and the pain is still fresh. Someday we'll laugh about this.

But for now if you mention it I will end you.
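Oh, and for completeness: if you actually want that comparison to come out equal, you have to make the widths and signedness agree yourself. A couple of hedged options (a sketch assuming a 16-bit short and 32-bit long, as above):

#include <stdio.h>

#define VALUE 0xF01FUL

int main(void)
{
    unsigned short u = 0xF01F;   /* unsigned: promotes to 0x0000F01F */
    short s = 0xF01F;            /* signed: sign-extends to 0xFFFFF01F in the comparison */

    if (VALUE == u)
        printf("unsigned short compares equal\r\n");

    if (VALUE == (unsigned short)s)   /* the cast strips the sign extension first */
        printf("cast signed short compares equal\r\n");

    return 0;
}

Both of those print. The original, with a plain signed short and no cast, does not.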

Quick Debug Tip!

You've probably been told you should always read the returned error codes from functions - especially if you're working with a new API. It's too easy to assume you have everything working because it compiles, and then it all falls flat when you try to run it. But the question is: what do you DO with the status? It's not always clear. There are usually a lot of them, and often you won't be handling most of them in a release configuration (hopefully you will have learned how to avoid most of them by the time you release). Some are potentially ill-defined (how many times have you seen a code like ERR_UNKNOWN returned from three or four different places in one function?). But what you should always do is read the code and check it - like this:

status = api_function(args);
if(status != ERR_OK)
{
    //Apocalypse?
    ERROR();
}


This is good practice - always do this. Even if the if statement is blank, still do it. Just get your hands into the habit of reading returned error codes and checking them.

But you may have noticed I put something called ERROR() in there. That's a placeholder for a real error handling strategy. Just start by defining it as a macro (Note, this may not be correct, it's early and I'm not in my right mind):

#define ERROR()


Now it exists but it doesn't DO anything. This is a fancy way of putting nothing inside the brackets but still reminding yourself that you have to do something later. If you do this for every returned error code then you will have a hook in place to do something if a proper status code isn't returned.

Now depending on what point in development you're at and what kind of system you're running you have several options. While still debugging I find it easiest to just define the macro to be something like this:

#define ERROR() do { disable_ints(); for(;;) { wdt_pet(); } } while(0)

This will entirely block the program (including interrupts) while simultaneously petting the watchdog timer so it doesn't restart the processor (just in case the watchdog interrupt isn't maskable on your processor). If you don't have a watchdog timer you can ignore that part. This approach works best when you have some sort of debugger: you start the program, wait a second and then pause execution to see if it's stuck in any of these loops. This approach also works in a multi-threaded system, assuming that your scheduler runs in a maskable interrupt. When you release this code you should remove that macro so that your system doesn't hang in the field over an ignorable error.

For a more advanced approach that you might actually want to use in a release environment you can potentially define different levels of error, such as ERROR and FAULT. FAULTs would obviously be more important and warrant more attention, while ERRORs might simply be counted and then ignored. Most of the time dire errors can't be handled locally, so your only option is to report them to the operator (if there is one), and usually his/her only option is to hit reset and hope everything goes back to normal. But at least there's a process!
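
Here's a rough sketch of what that split could look like. Every name in it - the counter, the report function, the macros themselves - is invented for illustration; your system will have its own reporting hooks:

#include <stdint.h>

//Hypothetical two-level strategy: ERRORs get counted and ignored,
//FAULTs get reported and then the system waits for a reset.
static volatile uint32_t error_count = 0;

void report_fault(const char *file, int line); //defined elsewhere (hypothetical)

#define ERROR() do { error_count++; } while(0)

#define FAULT() do {                                  \
                    report_fault(__FILE__, __LINE__); \
                    for(;;) { /* wait for reset */ }  \
                } while(0)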

There are other interesting wrinkles in this error handling game. The ARM Cortex M3, for instance, has fault exceptions that fire whenever something bad happens (accessing memory outside of RAM, dividing by 0, wearing white after Labor Day, etc.). On entry the core pushes a handful of registers - including the program counter - onto the stack and then runs the handler. You can use that saved information to create a report (because sadly most faults that land you in the handler cannot be recovered from without a reset). The processor you're using may have similar error-handling features. Take a look.
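
As a sketch of that idea (and only a sketch - HardFault_Handler is the usual CMSIS name and the assembly assumes GCC syntax, so check your vendor's startup code before trusting any of this), you can grab the registers the core stacked on entry and hand them to a C function that records them:

#include <stdint.h>

//Called with a pointer to the registers the Cortex-M3 stacked automatically
//on exception entry: r0-r3, r12, lr, pc, psr (in that order).
void hard_fault_report(uint32_t *stacked)
{
    uint32_t lr  = stacked[5]; //who called the faulting code
    uint32_t pc  = stacked[6]; //where the fault happened
    uint32_t psr = stacked[7];

    //Stash these somewhere that survives a reset (or print them),
    //then sit tight - most faults aren't recoverable anyway.
    (void)lr; (void)pc; (void)psr;
    for(;;);
}

//Naked handler: figure out which stack was active and pass it along.
__attribute__((naked)) void HardFault_Handler(void)
{
    __asm volatile(
        "tst lr, #4          \n" //EXC_RETURN bit 2: MSP or PSP?
        "ite eq              \n"
        "mrseq r0, msp       \n"
        "mrsne r0, psp       \n"
        "b hard_fault_report \n");
}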

To summarize - always check returned error codes. Even if most of the time you can't DO anything with them you can at least hang the program so you know you have a problem to fix. You might be able to get fancier later but as my favorite super-national paramilitary group used to say - Knowing is half the battle!

(Actually that's crap - they were all-American when I was growing up and I'm too old to change now! Get off my lawn! GO JOE!)

Tuesday, February 15, 2011

What are you doing?

Stop! Right now. What. Are. You. Doing?

Wow, let's stop that, I felt like a telegraph there for a minute (STOP). But the question stands: just what do you think you're doing? I intend this post mainly for people who have stopped developing software/hardware and have taken a break to absorb my acerbic wit. If I stopped you in the middle of enjoying a bowl of ice cream please believe me when I say I did not intend that you should question why you're putting it in your mouth. There's a good reason for that: it's ice cream. Duh.

But to all those coming for some witty banter - fresh from a break from developing the latest microcontroller-inspired widget - let me ask the question again. Just what do you think you're doing?

Probably your answer is going to be 'coding' or something similar. Good. Microcontrollers need code - that's obvious. But let me ask you this: how certain are you that the code you're writing right now is the code that's going to end up inside of that microcontroller when all is said and done? Uh, analysis? Maybe 15% certain I'd say - depending on what point you're at in the design. If you're early on in the design your chances are closer to 1%.

This is not your fault - well, that's a lie. It probably is your fault - but I'm trying not to scare you off. After all, this happens to literally everyone. Everyone. No one goes through a project without one of those moments where they realize they've seriously miscalculated the scope of their project, or its simplicity, or forgotten about some other major hurdle. Then they end up decimating their code - and not in the literal sense where 90% of it is left afterward. No, more like 1% is left - and that's probably a header.

I typically see this problem because people don't stop to consider whether their code actually works. They read specs, requirements and other documentation and then write a lot of C code. Or Java, or Python, or whatever. It's all the same - almost literally, because as I said, 99% of it will typically be gone by the end of the project (so it makes little difference what language it's in anyway). Sure, it probably compiles - with only a few dozen warnings (it's all small stuff, it doesn't affect how the program works, some casting will fix it, it's fine!) - but there's no telling whether it actually does what's expected of it, because you didn't set up benchmarks, tests or sanity checks for anything. Your development process goes something like this:

Read
Code
Code
Code
Code
Code
Delete 'old' code
Wish that you had used source control because that wasn't old code at all
Code
Code
Cod (not a misprint - you're enjoying fish at this point)
Ode (poetry break)
Code
Code
Code
Integration
Blank stare
Blank stare
Disbelief
Delete 99% of code
Recode
Recode
Etc

And let's be honest - if at any point in the design process you stop to think about it you're going to think 'Man, integration is going to be a bitch.' There's no project for which that isn't the case - integration is always difficult. But there's a way to make it easier - don't save integration for last.

Why is it that project management always seems to think it's their job to keep developers apart for as long as possible? It's probably because the Waterfall Model says that development and integration are two separate phases and one (development) is a prerequisite for the other (integration). So no skipping ahead to integration! What would project management do if not enforce the flawed and ultimately unhelpful vanilla implementation of the Waterfall Model on the poor helpless engineers under its command?

Of course not everyone blindly follows the Waterfall Model, or eXtreme Programming (seriously, the initialism is XP, not EP, so I capitalized the right letter there) - it certainly isn't the case where I work. No, what you need to do is integrate as soon as possible.

A project typically consists of several independent parts which can be integrated and tested without bothering the other parts. Projects usually also consist of several pieces of hardware, code or technology you've never worked with before. Let's be honest - when was the last time the new chip you used followed its own datasheet exactly? And for that matter, when was the last time two engineers working on opposite sides of a communication channel, attending the same meetings, reading the same documentation and potentially being in the same love triangle decided to implement their portions of a project in a compatible fashion? These aren't signs of immature engineers or bad project management or difficult documentation - it's just life. These things happen. The difference between an inexperienced engineer and an experienced one is basically how jaded he/she is. Optimism is not a useful trait when everything is contractually obligated to go wrong.

Given these problems it makes no sense to write literally everything and then go back and make sure it actually works. Here are several suggestions:

Use unit tests for complex algorithms to catch rookie mistakes such as off-by-one errors (we all make them) - see the sketch after this list.

Walk over to the other engineer's office/cubicle and ask copious amounts of questions. Nine times out of ten, the arbiters of what actually gets made on a project are the engineers themselves. Just to get everyone on the same page, it helps to ask your fellows 'So what CRC method are we actually using and how does it work?' Then document it.

If two chips have to talk to each other chances are they will hate each other and refuse to speak. Start early with actual chips and force them to get along as soon as possible.

Abstract away interfaces so you don't have to worry about specifics in unrelated parts of code. You don't have to know whether your serial interface actually works if your application's only method of accessing it is a ring buffer.

Verify all assumptions as soon as possible. Chances are you're wrong (life just hates you like that).
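
To make the first suggestion concrete, here's the kind of thing I mean by a unit test - no framework required, just a function with known answers and a main() that checks them (the function and the values are invented purely for illustration):

#include <assert.h>
#include <stddef.h>

//Hypothetical helper: count how many samples in a buffer exceed a threshold.
//Exactly the kind of loop that sprouts off-by-one bugs.
static size_t count_over(const int *samples, size_t len, int threshold)
{
    size_t count = 0;
    for(size_t i = 0; i < len; i++) //writing '<=' here is the classic mistake
    {
        if(samples[i] > threshold)
        {
            count++;
        }
    }
    return count;
}

int main(void)
{
    const int samples[] = {1, 5, 10, 15, 20};

    assert(count_over(samples, 5, 9)  == 3); //10, 15, 20
    assert(count_over(samples, 5, 20) == 0); //nothing qualifies
    assert(count_over(samples, 0, 0)  == 0); //empty buffer

    return 0;
}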

This is hardly an exhaustive list but I think you get the idea. In case you don't get the idea I'll state it as plainly as possible:

Code isn't useful unless it works! Don't sit on questions, concerns and unverified assumptions while popping out 1K lines of code a day. In a month you'll be left with 500 lines of good code and a looming deadline. Be pragmatic, be wary and be prepared.

Note to anyone who is reading this who may actually know me and/or work with me: I am not vindictive, frustrated or lacking empathy for this situation. Believe me I have been stuck in it plenty of times and it was all my fault. But I will be the last person to berate you and the first person to stick up for you in a meeting or directly to your/our boss. Sure, this may be a rookie mistake but we're certainly all allowed them. If we're not then we have no opportunity to become better engineers.

Thursday, February 3, 2011

Terminology Galore

If you're anything like me you hate terminology. You know, those special, magical technical words that people use that you don't know the definition of. Terminology. It'd be great if it weren't so imprecise. You'd think (well, hope) that a word has one definition. This is not even close to the case with regular English (and it's even worse with British English), but can't one hope for a more direct mapping from technically-minded people? A word should just mean one thing, right?

Take the word 'driver'. On your desktop PC you have drivers - all kinds of them. On an embedded system you have drivers - all kinds of them! But they're not the same kind of drivers - not exactly the same, anyway. If you wanted to fill a job writing Windows drivers you might not want to fill it with someone who writes embedded systems drivers, or even Linux drivers. So if you saw such a job advertised you'd want to make sure what kind of job you were getting into.

So some recruiter calls you and asks 'Do you have driver writing experience?' And you really want to ask what he means but you know he doesn't know. The best answer you're going to get is "What kind of experience do you have?" I would respond with something like "I've written multiple device drivers for bare-metal microcontrollers and real-time operating systems, is that what you're looking for?" And if you're lucky the notes he writes about your experience will be something like "bear-metal.. big iron? Iron Man? multiple operati.. operation systems. Operation - I loved that game...." And what he tells you is "Absolutely absolutely, I'll get in touch with them and let them know. That's great that's great!"

And that's only the answer I would give now - because now I have some idea what a driver is and how it isn't a board-support package or hardware abstraction layer (I think). But if you're anything like me a month ago you're a bit lost. You see I had developed drivers before (I think), and just didn't know it. So let's define some terms!

I consider a driver (in an embedded system) something that hides registers for you. For instance, here's some code that configures a timer on an MSP430 to create a servo control pulse:

//Clear timer A config
TACTL = 0x04;                  //TACLR = 1

//Select clock source and input divider
TACTL = (0x02 << 8) |          //TASSEL = 10 (SMCLK)
        (0x00 << 6);           //ID = 00 (divide by 1)

TACCTL0 = 0x0000;

//Configure compare and capture unit 1 for output compare mode 3
TACCTL1 = 0x0000;
TACCTL1 = (0x00 << 8) |        //CAP = 0 (compare mode)
          (0x03 << 5);         //OUTMOD = 011 (set/reset)

//Set CC1 to generate 1.5ms pulse - neutral
//MSP430 user's manual page 11-14
//?? what's going on here?
TACCR1 = 0x0000;               //SET output line HI at 0x0000
TACCR0 = PULSE_1MS;            //RESET output line (LO) at 1.5ms

//Set timer A period to 2ms
TAR = TIMERA_PERIOD;

//Start timer
TACTL |= (0x02 << 4) |         //MC = 10 (continuous mode)
         (0x01 << 1);          //TAIE = 1 (interrupt enable)



That's... a lot. A lot of bits. A lot of hex, a lot of OR'ing. A lot of bad formatting. Oh my, I can't handle this.

I'd rather do something like this:

timera_conf(SRC_SMCLK,DIV_1);
timera_cc_conf(FUNC_COMP,COMP_MODE_3);
timera_cc_setpw(1500 /*us*/);
timera_interrupt_enable();
timera_run();

See? No bits. That's a driver.
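
Under the hood, one of those calls is just wrapping the register writes from the ugly block above. Something like this sketch, where the function and the SRC_/DIV_ constants are my own inventions for this post rather than a real library, and TACTL assumes a device whose header uses that name:

#include <msp430.h>
#include <stdint.h>

#define SRC_SMCLK 0x02 //TASSEL = 10
#define DIV_1     0x00 //ID = 00

//The register fiddling lives here once, behind a name that says what it means
void timera_conf(uint8_t source, uint8_t divider)
{
    TACTL = 0x04;             //TACLR = 1: reset the timer
    TACTL = (source  << 8) |  //TASSEL: clock source select
            (divider << 6);   //ID: input divider
}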

Now, this is an internal peripheral. Those are easy. Well, easier. We know that drivers basically set registers. When it's an internal peripheral, accessing those registers is as easy as saying 'register = value'. But it's harder if you have (for instance) a peripheral connected over SPI. You still have to set registers, but that requires writing data over SPI - usually commands like 'I WANT TO WRITE TO THIS MEMORY LOCATION. IT'S A CONFIGURATION REGISTER YO' and then the peripheral responds 'YO DAWG THAT'S COOL WHERE THE DATA AT?' and then with another SPI transfer you say 'HERE DA DATA AT!'. So basically you'll have a peripheral driver utilizing the SPI driver. It's a whole bunch of driver-on-driver goodness.
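
Here's a sketch of that layering. Every name in it - the opcode, the register address, the spi_transfer() call, the accelerometer itself - is made up for illustration rather than pulled from a real part:

#include <stdint.h>
#include <stddef.h>

//Lower-level SPI driver, assumed to exist elsewhere:
//clocks out tx, optionally captures rx, handles chip select.
void spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len);

#define CMD_WRITE_REG 0x02 //hypothetical 'write a register' opcode

//Peripheral driver: hides the 'write this register over SPI' dance
void accel_write_reg(uint8_t reg_addr, uint8_t value)
{
    uint8_t tx[3];

    tx[0] = CMD_WRITE_REG; //I WANT TO WRITE TO THIS MEMORY LOCATION
    tx[1] = reg_addr;      //IT'S A CONFIGURATION REGISTER YO
    tx[2] = value;         //HERE DA DATA AT!

    spi_transfer(tx, NULL, sizeof(tx));
}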

So what about all the other crap? Like a Board Support Package. Well a BSP... supports a board. For instance the ez430-USB development kit has one LED on it (this is the extent of its on-board peripherals). It's located on P1.0 which is on physical pin 3 which can be accessed on port (yadayadayadayada). You don't want to know all of that - you just want a heartbeat LED. You want it to flash. So you have a smart guy write a function for you - a BOARD SUPPORT FUNCTION!

void bsp_led_toggle( void )
{
    P1OUT ^= 0x01; //the LED lives on P1.0
}


This is great - I don't have to know where the LED is, I can just say 'toggle that please!' and it gets done. That's board support - it supports the board. Whatever's on the board needs functions so I don't have to know all about it.

And what about the dreaded HARDWARE ABSTRACTION LAYER?!?
The HAL just makes sure you don't actually need to know what your hardware looks like to use it. For example, you can turn a general-purpose I/O port into a TTL serial interface - you just have to be careful with timing and such but it's certainly possible. Now imagine on your Arduino you have the regular UART and a software-based UART. You want the same interface to both: get a byte, put a byte, turn it on, turn it off. So you write up a bunch of functions and then you just say something like:

byte = uart_get(fake_uart);
byte = uart_get(real_uart);

Same interface, same bytes, different underlying hardware. That's what a hardware abstraction layer does.
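
One way to wire that up underneath (a sketch with invented names, not any real library's API) is a little table of function pointers per UART:

#include <stdint.h>

//One 'UART' is just a bundle of function pointers. The hardware UART and
//the bit-banged software UART each fill in their own implementations.
typedef struct
{
    void    (*init)(void);
    uint8_t (*get)(void);
    void    (*put)(uint8_t byte);
} uart_if;

//Provided elsewhere by the hardware driver and the software driver
extern const uart_if *real_uart;
extern const uart_if *fake_uart;

uint8_t uart_get(const uart_if *uart)
{
    return uart->get();
}

void uart_put(const uart_if *uart, uint8_t byte)
{
    uart->put(byte);
}

With that in place, the two lines above really do read identical bytes through identical calls - the only difference is which table you hand in.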

Hopefully with some of these definitions under your belt you'll feel a little less lost the next time these terms get thrown around. Good luck!

Sunday, January 16, 2011

Desk Update

I mostly finished my desk/workspace. I think it looks good. You can see pictures of it here: http://picasaweb.google.com/sfriederichs/DeskPics?feat=directlink

I totaled up the cost of the materials. Not tools or time, just materials. The new things I bought came out to about $340, but I also used some wood from my old desk. I'll figure about $60 for that, which puts the materials cost at around $400, plus about a full week of time spent on it. Don't even ask about the tools and miscellaneous consumables (staining pads, etc.). Tools are worth it, but consumables are just something you have to live with.

Saturday, January 8, 2011

A Proper Workspace

When it comes to work surfaces I am no fan of store-bought desks - I'm very angry at them. I need to do real work and to do that I need a solid dependable work surface. Sadly, you won't find that in most stores. Or, if you do, you'll pay $1000 for an 'Executive' desk that does its job well (and usually that job is much different than the one you want it to do). If you buy from a store, you're likely to find something like this in your price range: http://www.officemax.com:80/catalog/sku.jsp?productId=prod1811868

This monstrosity is a disaster. I mean, even the picture alone signifies that they just didn't care. Look at the CRT and circa 1996 inkjet printer. They didn't even update the picture for the new millennium. I'll start picking this thing apart in no particular order:

3) Teak Laminate Top. Teak Laminate Top. When you're just not important enough for real wood, we'll mix sawdust and glue together to give you particle board and plaster it with just enough real wood to make you think we spent money on you. This is low low low LOW quality. And sadly, if you're anything like me you've seen plenty of laminate (and hated it).

6) Pneumatic work surface adjustment - how likely is it that the pneumatic parts are similar to the ones you see in office chairs? You know, the office chairs that stop working particularly well after a few years? Yes, for now it's an adjustable work surface, but three years down the road you'll have to drill holes through the supports and stick bolts through them to make sure it doesn't keep dropping on you overnight.

2) Keyboard trays - these are universally awful. For me anyhow. Yes, I'm sure they work for some people but how likely is it that this keyboard tray will work for you? You'll either end up disconnecting the thing (painfully) or developing carpal tunnel. I have not had a single work desk (that I didn't make myself) be 'ergonomic' in any fashion and the keyboard trays were some of the worst offenders. Just look at the wrist rest - bunched up right against the keyboard. That looks painful. Look at the odd angle. Yes, I'm sure it works for some people, and I'm sure it can be altered, but let's face it - they use cheap hardware on these things and they never stay altered. They wear down and flop back to the worst possible position.

7) CD storage on the desk - since when did this ever work at all? Do you own 12 CDs? Then this is the desk for you! Pathetic. You need a real storage solution for CDs and DVDs - not the pathetic attempt seen on this desk.

9) No monitor stand. Really, this is a necessity. Your neck is just as important as your wrists.

4) No cable management. It will look ugly.

1) No front legs. Just try to lean on it and see what happens. And I don't hold out much hope for those metal support parts. They're probably hollow and will bend/break eventually. This is not how you make stable work surfaces.

5) Leg room - nil. Seriously, where do my legs go? Can I stretch them? No I'm going to hit the wall or one of those supports.

8) Too small - it's 29" deep, which is good, but only three and a half feet wide. Not nearly enough room to spread out several documents.

But hey, what can you expect for a desk that's only $80? Oh, wait, what's this:

Top component of the Balt Ergo Sit/Stand Workstation - ORDER BOTH TOP AND BASE
Oh, this is only half of it, so it's $170 or so, right? No? Nearly $400? Oh, well, that's a great price....

This will not do. I cannot work on those things. I don't just need a computer desk - I need a real workstation - computer desk, writing desk, solder station, filing system, parts storage, tool storage and server closet all in one. I need it to be solid. I need it to not hurt me. I need it to look nice. And I don't want to spend $1000 on it.

So I make them myself.

Now, this may surprise some of you, but I did not start out as an electron pusher. I grew up on a farm and learned about many, many things. I can spot weld, cut metal with a torch, help pour concrete, drive a tractor, be a human gate, give a pig a shot, stack hay and shoot guns. I am also not too shabby with woodworking. Wood is my preferred material. It's light, strong and pleasing to the eye. It's also phenomenally cheap and the tools to work it are similarly inexpensive. And when you mess up it's easy to hide (as my dad always said, 'A little putty and a little paint makes a carpenter what he ain't').

I started making my own desks with very simple construction. Here's an example of something I hacked together with 2x4's and wood screws that is very similar to my first desks:


Such a table is not difficult to put together. Most of the pieces you'll buy pre-made from places like Lowes. The top is a single 2 ft x 4 ft sheet of 3/4" plywood screwed into the frame. The frame and legs are 2x4's all around - butted up against each other and screwed with cheap wood screws. Everything was cut to length right in the store then brought home for assembly. The only touches of flair are counter-sunk holes for the screws (keeps the screw heads from sticking out) and paint, along with a coat of spar varnish to protect it from the elements.

It's not pretty, but it has its benefits. It is exactly the table I wanted - no more no less. It is solid, very solid. I can stand on it no problem. It won't tip or jostle and the legs stay in place. And it is cheap. Materials were at most $50 and my labor is free (I'm a slave driver).

That's all well and good, but it's still not really a piece of furniture - I wouldn't hand that down to my grandchildren. And it's not very shall we say... featureful. It's four legs and a top. No drawers, no shelves, nada. For a workstation I need something a lot more complicated. I need this:


This is my workstation. I just made it over Christmas with the help of my dad. It's just shy of 8 feet long and 30" deep, with a 12" deep monitor shelf along the back - all pine. It has four stylish legs all the way at the corners of the desk for maximum stability, and the part in the middle is also weight-bearing. I sanded the hell out of it and applied two coats of a stain/polyurethane combo. Half of it (on the left) is meant for electronics and the other half is for the computer. There are cable holes on the computer end for easy cable routing. My laptop there is sitting on an Ergotron laptop arm. There are spaces for drawers on both the electronics and computer ends. The part in the middle is slide-out storage for printers (both my inkjet and laser). Take a look at one of them in action:

Underneath, there are power strips running all along the back of the desk, as well as a piece of pegboard that holds my wireless router, NAS, print server, power adapters and other IT-related paraphernalia. This is temporary at this point and everything is going to go in the drawer when it gets put in later:

There is a second piece that is still waiting to get put in that will support rails for plenty of shelving storage and a bookshelf on top as well as extra lighting. Here's what it looks like right now:



(Oh and don't complain that it's covered in papers. This is what a desk is supposed to look like. If your desk is clean it means you're not doing any work.)

And what did this wonder cost me? Less than $500 at this point, and three solid days of work. Well, the tools cost a fair bit by themselves, but I think it's worth it when you consider what you can make with them.

I'll post pictures of the completed project when it's done.