Most of my programming life has been spent writing glorified scripting code: PL/SQL, VBA, LIL, various add-ins and bolt-ons to an existing application. Except for the database code, where the engine handles the locking and the ACID guarantees for me, I don't typically have to take threading and multiprocessing into account.
So, I'm having fun creating a little throwaway app that polls WMI on a remote server and refreshes the results to a little window. Yes, there are probably a million apps that do this better and more gloriously, but when "every problem is a nail"... you know.
It's amazing the sort of problems you can get yourself into doing this kind of thing. First of all, if you want to create a timer, there isn't just one, but three different ones, each with its own nuances. Then, when you go to the documentation page for a Timer class in C#, you get the nice little warning that these aren't thread safe. Ok, so admittedly, I know nothing about programming in a real language. I don't really know what a thread is, or any of those other six-letter words those C# programmers like to throw around. But, I can hack.
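For the record, the three timers in question are System.Windows.Forms.Timer (fires on the UI thread), System.Timers.Timer, and System.Threading.Timer (both fire on thread-pool threads). Here's a minimal sketch of the "easiest path" kind of setup I'm describing; the interval and the polling stand-in are made up for illustration:

```csharp
using System;
using System.Timers;

class TimerSketch
{
    static void Main()
    {
        var timer = new Timer(1000);            // interval in milliseconds
        timer.Elapsed += (sender, e) =>         // runs on a thread-pool thread,
            Console.WriteLine("poll tick");     // NOT the thread that created it
        timer.AutoReset = true;                 // keep firing, tick after tick
        timer.Start();
        Console.ReadLine();                     // keep the process alive
        timer.Stop();
    }
}
```

The fact that Elapsed fires on a thread-pool thread, not the thread that made the timer, is exactly the detail that comes back to bite later.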
So, I have a button to start the timer, a button to stop it, and a snippet of code that does the polling and painting. I can write a little one-off that runs the snippet, and it works just fine. So, hack 'em together, build it up, and it works just fine in dev.
Except, apparently, the program seems to leak memory: the app's memory usage just keeps going up and up and up as it runs. Curious. I walk over to the resident C# expert with my observation, and he says "C# can't leak memory! It's garbage collected!". A minute later, he says "but it can leak resources". OK.
So, my sample code that runs once just fine creates a new instance of an object each time it makes the call. Once I make sure I'm not leaking resources, the memory problem seems to go away. But when we apply the code to production, at some point during the night while it's "watching", it suddenly crashes. Turns out, that thread-safety thing I ignored early on actually means something: when you have a timed event, you have to be careful not to let one signalled event run into the next one, because bad things can happen.
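The resource-leak half of this is worth a sketch. The WMI classes (ManagementObjectSearcher and friends) implement IDisposable, and the fix was making sure each per-tick instance actually gets disposed. The names below are stand-ins of my own, not the real WMI API, just to show the pattern without needing a Windows box:

```csharp
using System;

// Stand-in for a WMI object like ManagementObjectSearcher, which holds
// unmanaged resources and implements IDisposable. (Hypothetical class,
// for illustration only.)
class FakeSearcher : IDisposable
{
    public static int Live;                    // count of un-disposed instances
    public FakeSearcher() { Live++; }
    public string Get() => "some WMI result";
    public void Dispose() { Live--; }
}

class Polling
{
    // The leaky version: a new instance every tick, never disposed. The GC
    // reclaims it eventually, but "eventually" can be a long time, and
    // meanwhile the underlying handles pile up.
    public static string PollLeaky()
    {
        var s = new FakeSearcher();
        return s.Get();
    }

    // The fixed version: `using` guarantees Dispose() runs on every tick,
    // even if the poll throws.
    public static string PollClean()
    {
        using (var s = new FakeSearcher())
        {
            return s.Get();
        }
    }

    static void Main()
    {
        for (int i = 0; i < 100; i++) PollLeaky();
        Console.WriteLine(FakeSearcher.Live);   // 100 live, un-disposed objects
        FakeSearcher.Live = 0;
        for (int i = 0; i < 100; i++) PollClean();
        Console.WriteLine(FakeSearcher.Live);   // 0: every instance cleaned up
    }
}
```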
Ok, so I learn a little something about threads. Fascinating stuff, and there are tons of different ways you can implement thread safety. I can appreciate locks; I use them all the time in databases. Take a lock around the shared work, and the program doesn't mysteriously crash.
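One common shape for that lock, sketched here with hypothetical names of my own (not the actual app's code): if a tick fires while the previous poll is still running, skip it rather than let the two pile up.

```csharp
using System;
using System.Threading;

class GuardedPoll
{
    static readonly object Gate = new object();

    // Called from the timer. Returns true if the poll ran, false if it
    // was skipped because the previous tick was still in flight.
    public static bool OnTick()
    {
        if (!Monitor.TryEnter(Gate))
            return false;                // previous tick still running: skip
        try
        {
            Thread.Sleep(200);           // stand-in for a slow WMI poll
        }
        finally
        {
            Monitor.Exit(Gate);
        }
        return true;
    }

    static void Main()
    {
        // Simulate a tick arriving while a poll is in flight: hold the
        // gate ourselves, then fire a "tick" from another thread.
        bool overlappingTickRan = true;
        lock (Gate)
        {
            var t = new Thread(() => overlappingTickRan = OnTick());
            t.Start();
            t.Join();
        }
        Console.WriteLine(overlappingTickRan);  // False: the overlap was skipped
        Console.WriteLine(OnTick());            // True: with the gate free, it runs
    }
}
```

Monitor.TryEnter is the non-blocking cousin of the `lock` statement: instead of queueing up ticks behind the lock (which just trades a crash for an ever-growing backlog), the overlapping tick is dropped on the floor.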
So, here's the issue. At the outset, I make decisions without realizing I'm making a choice, perhaps because I only know about one choice or because I'm satisficing on what looks to be the easiest path.
As I grow, I'm aware of more choices, but when I make an active choice, I'm not really aware of the ramifications of each option. It's not practical, either, to become paralyzed by worrying over the implications of every choice, because we make countless choices every day. More satisficing happens.
Programming is something you would think to be highly constrained, and yet at each turn you're faced not only with the decision between what may appear to be logically equivalent options, but also with the law of unintended consequences when using complicated systems.
Far too often, you'll hear someone say "Microsoft sucks!" when referring to their computer, or "It's slow!", when they haven't considered the implications of that animated "SG-1" cursor. Think of all the possible flaws a flashing cursor might have, even constrained to the examples behind the cut. Now, consider that these might not be a problem, except for that "TARDIS" clock icon you also have running. Add in the mandatory corporate spyware checker and virus shield, and each computer becomes a rat's nest of unintended consequences. It's not the fault of Microsoft, but of how each user, paired with each developer and vendor, makes a series of interlaced decisions, none of which is ever formally decided.
People react to complexity differently. Sometimes, they'll freeze up. Sometimes, they'll choose randomly, or satisfice, or defer judgement to others. Sometimes, they won't even notice. Sometimes, they'll deliberately constrain their choices by choosing, say, Apple over Wintel. Sometimes, they'll accept whatever default choices are made for them "out of the box" or "off the shelf", or expect that when they "configure", any possible implications of the combination of myriad moving parts are "supported".
Is more choice always ethical, or does it confer ethical responsibilities that are not always appropriate? When you check an "I agree" checkbox, knowing full well that no one knows the implications of that choice, have you made an ethical error? It's clear that if you constrain choice (see: Vista) people will scream bloody murder, but to what extent, and under what circumstances, is such squawking irrelevant?