Saturday, May 31, 2008

Starting Windows Service On Install

Sometimes piling up technical debt seems completely unavoidable.


So I am flying along using all the tools and tricks while writing a Windows service (aka NT service) in Visual Studio. Life is good. I want a Windows service and the IDE has a templated project for it. The result is not too complicated. The templated Windows service project leverages the Dot Net Framework, which wraps all the difficult Win32 plumbing.

Of course I will need to install my application, and to say installing a Windows service by hand is difficult is an understatement. But once again there is a nice templated class I can add to my project that leverages an excellent Dot Net Framework class. Again the result is clean, although with a small hiccup: there does not seem to be an option, flag, or method to say 'start this service on install'.

This is a good example of Technical Debt. The installer for services is not complete, so I have to pay down this particular debt to get what I want. I am still way ahead and the IDE is still really useful.

If I double click the Service Installer object it created for me, the IDE creates the event handler 'sampleServiceInstaller_AfterInstall'. So all I need to do is get a reference to my service, which the Framework class 'System.ServiceProcess.ServiceController' makes really easy, and set its state.

There is a gotcha here: if you try to start the service when it is already started, that counts as an error. In fact, whenever this class has any difficulty at all it tends to handle the problem by throwing an exception, partly because the underlying Win32 APIs it wraps behave this way. So for now I have an empty try/catch handler, which means I have piled up a bit of technical debt.

Now on to starting the service. The 'ServiceController' class has a 'Status' property, and if we build a switch on it using the switch snippet we get most of the following code:


private void sampleServiceInstaller_AfterInstall(object sender, InstallEventArgs e)
{
    try
    {
        System.ServiceProcess.ServiceController serviceController = new ServiceController();
        serviceController.ServiceName = "SampleService";
        serviceController.Refresh();
        switch (serviceController.Status)
        {
            case ServiceControllerStatus.ContinuePending:
                break;
            case ServiceControllerStatus.PausePending:
                break;
            case ServiceControllerStatus.Paused:
                break;
            case ServiceControllerStatus.Running:
                break;
            case ServiceControllerStatus.StartPending:
                break;
            case ServiceControllerStatus.StopPending:
                break;
            case ServiceControllerStatus.Stopped:
                serviceController.Start();
                break;
            default:
                break;
        }
    }
    catch (Exception ex)
    {
        // Swallowing the exception for now: more technical debt.
    }
}




There are a lot of states there. The one I am interested in is the Stopped state, so I make the call to start the service there.

Which leaves open the other states. Several states have 'pending' in their name; for those the program needs to wait and query the status again, since the pending action will resolve one way or another. But the context makes this difficult: we are in an installer (technically we are doing post-install actions, but the user will perceive it as part of the installer).

So we do not want to loop and keep waiting for this to end. We could launch a thread to watch over the situation, but then we face having the installer appear to exit while a thread is still working. It seems no matter where I turn I am going to pile up some technical debt.
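One way to pay some of that debt down, sketched below under the assumption of the same "SampleService" name as above: 'ServiceController.WaitForStatus' with a timeout bounds the wait, so the pending states resolve without looping forever and the installer can never hang indefinitely (it throws a TimeoutException instead).

```csharp
// Sketch: resolve the pending states with a bounded wait instead of a loop.
// "SampleService" and the 30-second timeout are assumptions for illustration.
using System;
using System.ServiceProcess;

class StartOnInstallSketch
{
    static void StartIfStopped(string serviceName)
    {
        using (ServiceController sc = new ServiceController(serviceName))
        {
            // Give any pending transition up to 30 seconds to settle;
            // WaitForStatus throws System.ServiceProcess.TimeoutException
            // if the target state is not reached in time.
            TimeSpan timeout = TimeSpan.FromSeconds(30);
            sc.Refresh();
            if (sc.Status == ServiceControllerStatus.StartPending)
                sc.WaitForStatus(ServiceControllerStatus.Running, timeout);
            else if (sc.Status == ServiceControllerStatus.StopPending)
                sc.WaitForStatus(ServiceControllerStatus.Stopped, timeout);

            sc.Refresh();
            if (sc.Status == ServiceControllerStatus.Stopped)
            {
                sc.Start();
                sc.WaitForStatus(ServiceControllerStatus.Running, timeout);
            }
        }
    }
}
```

This still blocks the installer for up to the timeout, so it trades one debt for a smaller, bounded one.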

Friday, May 30, 2008

Challenge of the Installer

The installer has enough challenges.


The desire to add features that are better served by administrative forms in the program or by post-install configuration steps is greatest at the beginning of a project's life. This is because everything is relatively simple at that point, including the installer.

The installer program's function starts out simple enough: it needs to copy a few files. And even though uninstall is a separate function, everyone considers it part of the installer, so the installer will need to uninstall the application as well. But this too is fairly easy.

The challenges start when moving past this base install. Immediately one needs to consider installing on multiple operating systems. By this I do not mean only multiplatform: Linux, Unix, Windows, Mac and mobile. Within a platform like Windows there are multiple OS options. To point out the worst offender, Vista versions have proliferated; there are more than 6 Vista options. That does not take into account that Windows Server is a separate line and Windows XP is still another option. The Mac platform is not as bad, but it still has multiple versions with compatibility challenges. And arguably Unix lost its grip on IT precisely because of all of the 'flavors' of Unix. Linux was seen as unifying those flavors, but that had its own challenges.

Beyond the OS differences there are multiple platform challenges. Internet browsers lead the list of problems. Internet Explorer's market share continues to erode, with Firefox taking up most but not all of the gain. On the one hand a duopoly may not seem bad, but these are two very different browsers. Another emerging platform challenge is the media players involved. And in our future the CPU will matter much more than it does now. We should be worrying about 64 vs 32 bit, but modern IDEs do a good job of hiding this difficulty from us. That is not the end of the story; it will pop up later. The multicore future may add some complexity as well.

So far we are still pretty good. Perhaps it is not until we need to save state that things get difficult. There are always some configuration options that need to be persisted and this probably would not be much of a challenge until considering the upgrade path. Initially, the installer just installed and uninstalled. But with an upgrade you want some items like configuration or licensing to survive the install.

Frameworks like Java and Dot Net have radically simplified the development process, but the install pays a small fee for that ease. The installer has to either install the framework, make the framework available for install, or check for the existence of the framework.
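That last option can be sketched in a few lines. The registry key and 'Install' value below are the well-known markers for .NET 2.0; treat them as an assumption to verify for whatever framework version you target:

```csharp
// Sketch: a bootstrapper check for the .NET Framework 2.0 by reading its
// setup registry key. Other framework versions use different keys, so
// verify the path for your target version.
using System;
using Microsoft.Win32;

class FrameworkCheckSketch
{
    static bool IsNet20Installed()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727"))
        {
            if (key == null)
                return false;
            // "Install" is a DWORD set to 1 when the framework is present.
            object install = key.GetValue("Install");
            return install is int && (int)install == 1;
        }
    }

    static void Main()
    {
        Console.WriteLine(IsNet20Installed()
            ? "Framework found, continue install."
            : "Framework missing, launch the redistributable first.");
    }
}
```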

A similar story holds for third party and, for lack of a better term, second party applications. Databases are a common third party application. Second party applications are other applications that your company or strategic partners want to distribute with your application. On the surface it might seem that these should be the easiest of all, since you have some influence over them. The reality is that whatever you gain from this closeness you lose due to these applications changing more rapidly, and they are almost always buggier than long-standing third party apps.

I will leave it there. There is more but this should be enough to scare you into working on the installer first and always keeping the installer up to date.

Thursday, May 29, 2008

Delete Team System Builds

Some features of Team System can only be reached at the command line.
Unfortunately deleting a build is one of these.

You will need to use the "Visual Studio 2008 Command Prompt" (or "Visual Studio 2005 Command Prompt"). This has the 'Path' variable correctly set to find 'TfsBuild'.

C:\Program Files\Microsoft Visual Studio 8>tfsbuild /?
Microsoft (R) TfsBuild Version 8.0.0.0 for Microsoft (R) Visual Studio 2005 Team System
(C) Copyright 2006 Microsoft Corporation. All rights reserved.

TfsBuild help [command]

command The name of the command you want help on

List of commands:
start Starts a new build on the build machine
delete Deletes completed build(s)
stop Stops the build that is in progress
help Prints this help message


Usually you will need to stop a build before deleting it.

c:\>tfsbuild stop http://tfsserver:8080/ "Tfs Project Name" samplebuild_20080529.4

Then you can delete it:

c:\>tfsbuild delete http://tfsserver:8080/ "Tfs Project Name" samplebuild_20080529.4

Wednesday, May 28, 2008

Feature Focus or Bug Focus

What is more important Features or Bugs?

If ever there were an acid test for maturity in Software Development, perhaps none is better than Features or Bugs. Which is more important: implementing a new feature or removing existing bugs?

If a developer is focused on features then they have a Feature Focus. Bug Focused developers give precedence to fixing bugs first.

Which is better? Fixing bugs. Which do new developers prefer? Implementing new features. Only over time do developers realize that as their bugs accumulate, implementing new features becomes ever more difficult. One day they will experience the truly awful feeling when a newly revealed bug shows that a recently added feature has to be completely reworked.

So what do you do if you have a crew of Feature Focused developers? This is a common problem. One solution is to use QA to give them an external source of Bug Focus. To do this, you must be prepared to step in and support QA. At the same time you must dampen the praise given for new features. This is simply done by insisting that only releases which have passed QA will be demonstrated.

Tuesday, May 27, 2008

Displaying System Info the way Office does

Office 2003 had a button 'System Info' on the about page.

It brought up a dialog with the title 'System Information'

This dialog, which is actually another application, displays sections for:
-Hardware Resources
-Components
-Software Environment
-Internet Settings
and others.

This is handy information when troubleshooting issues on a client's computer. By putting a button on your application to launch this, you lower the mental barrier for your clients.

Using Task Manager you can see this results in the process 'helpctr.exe'.
But if you run that directly you get the application with the title 'Help and Support Center'.
Alas, it has no command line help switch (/? -? /help).
What Office does is call 'msinfo.exe', which in turn invokes:
%windir%\pchealth\helpctr\binaries\helpctr.exe -mode hcp://system/sysinfo/msinfo.xml

Here 'msinfo.xml' provides some configuration information. Try messing with the 'Width' and 'Height' elements to see this. But most important is the element 'TopicToDisplay', which has the value 'msinfo.htm' and controls which topic, in the form of an HTML page, is loaded.

You can experiment with this. Save a copy and change 'TopicToDisplay' to 'sysHealthInfo.htm'. Then invoke 'helpctr' with your altered XML file.
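To put that button on your own About page, a minimal sketch is just a Process.Start call. The helpctr path and the hcp:// argument are taken from above; both are XP-era details worth verifying on a target machine:

```csharp
// Sketch: launch the System Information view the way Office does, by
// invoking helpctr.exe directly with the sysinfo topic.
using System;
using System.Diagnostics;
using System.IO;

class SystemInfoLauncherSketch
{
    static void Main()
    {
        string windir = Environment.GetEnvironmentVariable("windir");
        string helpctr = Path.Combine(
            windir, @"pchealth\helpctr\binaries\helpctr.exe");
        // -mode points helpctr at the XML config that selects the topic.
        Process.Start(helpctr, "-mode hcp://system/sysinfo/msinfo.xml");
    }
}
```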


Monday, May 26, 2008

The Four Faces of a Software Application

Ever feel that no two people see your Software Application the same way?


Unless it is Notepad, or you are not going to share your software with anyone, you will notice four distinct collections of scenarios for your application: Installation, Configuration, Administration and User.


Installation is the easiest to overlook because every well maintained program makes installation a breeze, an ease which masks the difficulty of the problem. Today's applications are rapidly developed in days, sometimes just hours, and for the talented perhaps in just minutes. The rich feature set that is immediately available comes from leveraging powerful and extensive frameworks. Getting those frameworks on the target machine amounts to an install within an install. Which brings up a hard, unalterable rule of Software Development: "Create the Install program before you have anything to install." Less mature developers and nontechnical managers will not understand this focus on the install first. If you have good QA, they will save the company from itself by insisting on an installer.

After an application is installed it will be configured to its environment. Initially an ever growing amount of configuration will live in the installer. Time will reveal that this bloats the installer and turns testing the installer into a nightmarish complex exercise. Eventually, separating the installation from configuration is required. Where those configuration components live then becomes a delicate trade off. If you can rely upon your users, you may be able to leverage a large amount of the System OS configuration components. If not, you will need to wrap them and expose them in your administrative interfaces.

Skipping over Administration for a moment: once the application is installed and configured, it should run without any care and feeding. Thus all of the things related to Install, Configuration and Administration are merely noise to the end user. The user interface should focus only on how to get work done on a day to day basis.

There should be a separate interface for Administration, which is now defined as the exceptional or unusual cases of interacting with the program. Troubleshooting or diagnosing failures is also included here. The Administrative interfaces should anticipate that there will be failures and, at the very least, provide some means of reporting them from within the app.

Sunday, May 25, 2008

So How Long Is Team Server Taking to Backup?

The backups of our Team Server came into question the other day.
It was suspected that the backups were running for a very long time and during this long run they were degrading the performance of other aspects of Team System.

One of the cool things about SQL Server is that nearly everything about SQL Server's operation that can be stored in a database is stored in a database. The database 'msdb' contains information about backups in the tables 'backupset' and 'backupfile'. So the following query will show how long backups took to run (adjust the date):


SELECT
    s.backup_start_date,
    s.backup_finish_date,
    DATEDIFF(minute, s.backup_start_date, s.backup_finish_date) AS duration_minutes,
    s.database_name,
    f.logical_name,
    f.file_type
FROM backupset s
INNER JOIN backupfile f
    ON s.backup_set_id = f.backup_set_id
WHERE s.backup_start_date > '2008-05-20 10:01:16.000';

Saturday, May 24, 2008

The Golden First Year

You will never get paid more than you do in the first year of programming.

While the actual salary will be low, the ability to put that first year on your resume will pay dividends year over year for the rest of your life. Some people call it the Magical First Year. The phenomenon is real and overpowering. It started before the Net Bubble, although you could argue it was most noticeable during the Bubble.

But why do companies persist in this pattern? Having acquired skill or technical knowledge in specialty after specialty over the years, I have noticed it all pretty much happens in 90 days. That is, if I am focused on an area, I will acquire nearly everything needed in the first 90 days. Some residual learning occurs over the remaining part of the year, but it is small. The second year is nearly empty of significant growth.

So one wonders why companies don't hire developers with zero years of experience and write off the first three months. Or better yet, why not conduct a training year? To be sure, quite a few companies do opt for a 'boot camp' or training program, but these seem to be diminishing. I do not have hard numbers, just what I observe in the market. I suspect the problem is that once that Golden First Year is in the possession of the employee, the market beckons and the employee cashes in with another company.

But what about at the individual project level? Perhaps those 9 months will pay off the start-up 3, and most projects do not last multiple years anyway. Well, this is also true, but the sad truth is that projects are never started with 3 months of lead time. So this amounts to starting 3 months behind and hoping. You see, there is a risk that the person will not acquire the skill in 3 months. So then what? Start over? Give up another 3 months?

I do not see any way out of this dilemma. It all comes down to the brutal marketplace. Skill is the coin of the realm.





Friday, May 23, 2008

Snippet Definition Commentary

Snippets are probably the most powerful code generation tool available in Visual Studio. The power results from simplicity blended with extensibility.

To better show this combination of ease and flexibility I am going to dissect the 'class' snippet.

It is stored as XML, so it starts with the XML directive, and the top element is 'CodeSnippets', which has one or more 'CodeSnippet' elements. So the boilerplate for all snippets looks like:


<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">

  </CodeSnippet>
</CodeSnippets>




A 'CodeSnippet' element has two child elements, 'Header' and 'Snippet'.
Header contains information about how the snippet should be used.
Snippet defines how the code generation works.



<CodeSnippet Format="1.0.0">
  <Header>

  </Header>
  <Snippet>

  </Snippet>
</CodeSnippet>




The 'Header' element has child elements of 'Title', 'Shortcut', 'Description', 'Author' and 'SnippetTypes'.
Visual Studio uses the 'Shortcut' element's value to autosuggest the snippet. In this case the value 'class', which is also a keyword, helps suggest this snippet at the right time.
Since there is no restriction on the uniqueness of shortcuts, the IDE will show when there are multiple snippets for the same shortcut.
The 'Title' and 'Description' allow users to select the appropriate one.
Another key element is 'SnippetTypes', which can have one or more 'SnippetType' child elements.
In this case it indicates that the class snippet can be used to both expand and surround code. The 'SurroundsWith' value indicates that if some text is selected when the snippet is invoked, the selected text will be used.


<Header>
  <Title>class</Title>
  <Shortcut>class</Shortcut>
  <Description>Code snippet for class</Description>
  <Author>Microsoft Corporation</Author>
  <SnippetTypes>
    <SnippetType>Expansion</SnippetType>
    <SnippetType>SurroundsWith</SnippetType>
  </SnippetTypes>
</Header>




The 'Snippet' element is where all the code generation work is defined.
In order for the snippet to be something more powerful than copy and paste, we need a way to declare some placeholders.
That is what the 'Declarations' element with its 'Literal' child elements defines. The 'ID' child of 'Literal', in this case 'name', ties the declaration to its uses in the 'Code' element; this is important.
The 'ToolTip' provides context sensitive information when using the snippet. The 'Default' element gives the 'Literal' a value at the start.

The 'Code' child element of 'Snippet' has the body of the code to be created. When the snippet is expanded, the user supplied values of the 'Literals' will be substituted.
So the 'Literal' with 'ID' of 'name' is seen here as '$name$'. There are a few built in and implied literals as well.
When the snippet is used in 'SurroundsWith' mode, the selected text provides the value for the implied '$selected$'.


<Snippet>
  <Declarations>
    <Literal>
      <ID>name</ID>
      <ToolTip>Class name</ToolTip>
      <Default>MyClass</Default>
    </Literal>
  </Declarations>
  <Code Language="csharp"><![CDATA[class $name$
{
$selected$$end$
}]]>
  </Code>
</Snippet>





Now, having all the details and a working example, all one needs to create a snippet is a chunk of code that is constantly being used. Simply put it in the 'Code' element. Then repeatedly extract literals from the code, define those literals in the 'Declarations' element, and you are done.
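Putting those steps together, here is a sketch of a custom snippet. The 'tryf' shortcut, title, and body are hypothetical, chosen only to show a surround-style snippet with no literals to extract:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>tryf</Title>
      <Shortcut>tryf</Shortcut>
      <Description>Surround selected code with try/finally</Description>
      <Author>Sample</Author>
      <SnippetTypes>
        <SnippetType>SurroundsWith</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Code Language="csharp"><![CDATA[try
{
    $selected$$end$
}
finally
{
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Save it with a .snippet extension and import it through Tools, Code Snippets Manager.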

Thursday, May 22, 2008

How to think of QA

Suppose you had a Friend.

You know, pretend you are back in school taking some really hard courses with difficult assessments. Now suppose all your friend did was check your homework before you turned it in.

How would you feel about that? How would it change things?

It is odd that not all Software Developers feel this way about QA. But you can tell the Software Developers who have reached maturity because they understand QA is their best friend. And over time they have figured out the homework check dynamic and it changes the way they operate.

Clearly it is next to worthless getting your homework checked right before the bell rings. If your friend finds anything, you will only have time to change the most simple stuff: spelling, a plus or minus sign, grammar, elimination of extraneous solutions. If you have something fundamentally wrong you may need to change everything.

The first step towards QA acceptance is when Software Developers start proactively getting their code to QA early. Some even start to pester QA to test their stuff first. QA loves this. Just like the friend who checked your Algebra homework in high school, everyone likes to be needed.

The next phase follows naturally: Software Developers start getting their work to QA often. As Software Developers mature from a Feature focus to a Bug focus, they realize it is easier to get a Feature implemented Bug free in isolation. As such they want each individual Feature tested as soon as it is available. They also soon encounter Daisy Chain Bugs and Regression Bugs.

Daisy Chain Bugs, Regression Bugs, Requirement Bugs and more exotic Bugs take us into the realm of Software Development Management. Software Development Managers who have reached maturity understand how critical good QA is to sustained success in software development. The key there is 'sustained'. It is possible to get lucky and implement a bugless feature. The difficulty arises when you want to add another, and then another. The final maturity revelation is that:

Good QA can save you from bad programming, but no amount of good programming will save you without QA.

Wednesday, May 21, 2008

Retrieving GPS data from Sprint Aircard

The Dot Net Framework has so much in it that it is more likely to have something than not.

I remember browsing over the "System.IO" namespace and being amused that it had:
"System.IO.Ports.SerialPort"

Thinking that my serial communication programming days were long over, I chuckled and moved on.

So I was surprised to find that the Sprint Aircards we are using to stream video from Police Cruisers to the world expose the GPS data continuously on a com port. A little surprising because the connection is a USB port. So thanks to the Dot Net Framework, I just have to write 16 lines of code and I have location data.
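Those lines amount to roughly the following sketch. The port name and baud rate are assumptions (check Device Manager for the virtual COM port the aircard actually creates); NMEA devices conventionally talk at 4800 baud:

```csharp
// Sketch: read NMEA GPS sentences from the virtual COM port exposed by the
// aircard. "COM4" and 4800 baud are placeholder values for illustration.
using System;
using System.IO.Ports;

class GpsReaderSketch
{
    static void Main()
    {
        using (SerialPort port = new SerialPort("COM4", 4800))
        {
            port.ReadTimeout = 5000; // ms; NMEA devices emit about once a second
            port.Open();
            for (int i = 0; i < 10; i++)
            {
                string sentence = port.ReadLine();
                // $GPGGA sentences carry the fix: latitude, longitude, altitude.
                if (sentence.StartsWith("$GPGGA"))
                    Console.WriteLine(sentence);
            }
        }
    }
}
```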

Now, looking for the killer app here, I can pass this location information to MS Live Maps along with the donut parameter and bring up a map showing the police officer's location together with the nearest donut shops.

Tuesday, May 20, 2008

Use the force (intellisense, snippets, refactor).

Let go of the compiler. Use the features Luke.

I received feedback on one of my posts that a line of code was too complicated for average-to-beginning programmers:


private static IEnumerable<List<int>> permutations(int size)



The interesting thing about this line is I did not write it.

I did not even write all of the line that calls it:


foreach (List<int> permutation in permutations(9))

I do recall my thinking as I was creating it. I wanted to loop over a collection of permutations, so I typed the first letters of 'foreach'. Intellisense kicked in and suggested the 'foreach' snippet. I hit tab twice to accept it and the following code was generated:


foreach (object var in collection_to_loop)
{
}

The snippet highlighted the snippet literals to be replaced. I had not thought about what type a permutation would best be, so I used a list of integers, 'List<int>'. I already knew I wanted a permutation, so naturally the collection would be called permutations. But since I knew I wanted to experiment with yield, I turned it into a method call, which left me in the same position as those 'typical' readers. But selecting 'Generate Method Stub' from the context menu produced the complex type for me.

What is interesting is how the compiler has moved from nagging us about our mistakes after the fact to proactively helping us write code. So I stopped worrying about the compiler, and now it worries about me.

Monday, May 19, 2008

Limitations of Snippet Functions

Perhaps you have experienced the 'Magic Moment' (to use Disney lingo) when you were coding in C# and you needed a switch statement. If you had an enumeration defined already and the switch statement was using a variable of that enumeration type, then almost like magic the body of the switch statement was filled out with case statements for each of the enumeration values.

Up to that point Snippets felt like just a cool way to insert common boilerplate code. Well, unfortunately that is pretty much still the case. The reason is that in order to get this functionality you must use an XML element, "Function", in your snippet definition. But that is the end of the line. No doubt the "Function" element will support more in the future, but right now it supports only the three functions "GenerateSwitchCases", "ClassName" and "SimpleTypeName".

"GenerateSwitchCases" not surprisingly generates the cases for the switch statement.

"ClassName" returns the name of the class that is currently in scope.

"SimpleTypeName" tries to simplify the name subject to the context where the snippet is invoked.

Nothing else is available.
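To see how a function is wired in, here is the relevant fragment of the built-in 'switch' snippet, quoted from memory (worth checking against the copy installed under the Visual Studio snippets folder). The function's output replaces the '$cases$' literal in the 'Code' body:

```xml
<Snippet>
  <Declarations>
    <Literal>
      <ID>expression</ID>
      <ToolTip>Expression to switch on</ToolTip>
      <Default>switch_on</Default>
    </Literal>
    <Literal Editable="false">
      <ID>cases</ID>
      <Function>GenerateSwitchCases($expression$)</Function>
      <Default>default:</Default>
    </Literal>
  </Declarations>
  <Code Language="csharp"><![CDATA[switch ($expression$)
{
    $cases$
}]]>
  </Code>
</Snippet>
```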

Sunday, May 18, 2008

UAT Environment

User Acceptance Testing, UAT for short, also goes by the name Demo or Demonstration.

If you have a somewhat typical set of environments setup to support software development, they will correspond roughly to the following:

Development (DEV)
Build
Integration (INT)
QA/Test
UAT
PreProduction
Production (PROD)

How these are used also varies slightly from one company to the next, but some key concerns are addressed at each level.

Build ensures the code is in Version Control and always builds.

Integration ensures the code under development and third party code, software and legacy systems all still work together.

QA or Test validates the ever increasing feature development and bug fixes.

So it may seem like UAT is overkill. But it is precisely because the previous (lower) environments have checked out and passed on all these details that UAT can provide the first chance for the Customer to give their feedback or approval. The system is in a state where it could be deployed; all of the bugs and issues have been resolved.

But it is precisely because the Customer is providing feedback here that Software Developers will be most insistent on gaining access to the system.

One reason is familiar: they need to fix something. The answer to this is the same as it was on QA; fixes should flow through the process to keep the version integrity. (Although QA should have already found the bug, so if this is a new bug, some investigation should be done to determine why QA missed it.)

Another reason is the Customer is changing their mind. Here Acceptance is called for: not only will the Customer change their wishes, they may never stop. Sometimes they flip flop between several implementations. Sometimes there is more than one customer and they cannot agree. In this case you will need to find a way to make the different options configurable. This will need to be documented in a Use Case and follow the process of every other enhancement.

But the best reason for keeping Developers off of UAT is to see "Can this solution live without the constant care and feeding by Developers?" Even better "Can mere mortal users actually use this system?"


Saturday, May 17, 2008

Usable Use Case

"Maybe we should buy a book?"

Never lose sight of why Use Cases are written. Their purpose is simple: they communicate requirements. Specifically, they are the requirements of how a system should function from the user's point of view.

The best Use Cases are those that are the most simple while not losing clarity on requirements. As such, over time a common format has appeared. Usable Use Cases specify the System, the Actors using the System and the Goal of the Use Case.

For an individual Use Case, what is of interest when specifying the System is the state the system is in at the start of the Use Case, and then what the state of the system will be at the end. What should not be a concern of the individual Use Case is a complete description of the System. Also, the Use Case should be completely silent on how the System should be implemented.


Since the Use Case describes the System from a user's point of view, there will be one or more Actors on the System. If possible, subdivide the Use Case to lower the number of Actors involved, if this can be done without losing clarity. The other thing to address is how Actors of differing permissions or roles will be allowed to use the System. Four typical Actors are Administrator, Power User, User/Guest and Non User.

The Goal of the Use Case will dictate the clarity of requirements. The Goal is what the User is trying to accomplish with the System.

This all seems simple enough, and if you stop here your Use Cases will serve their purpose well. But no one can leave it alone. One common antipattern is to make the Use Cases compile. There is a lot of repetitiveness in Use Cases, and if the Architecture matches up well it seems like we could almost take the Use Cases and generate code from them. If you are intent on doing this, go ahead: get it out of your system and discover the issues with it that everyone else has discovered. Wisdom comes from experience, and experience comes from mistakes.

Another Use Case antipattern is exception processing. You will have exceptions, but they should show up in your use case as terminal statements. The overwhelming majority of the use case should be dedicated to the normal flow of the system. If this is not the case, consider breaking the Use Case into multiple Use Cases. Perhaps more Actors or System States need to be enumerated.


Friday, May 16, 2008

The Power of Yield

Generating all the permutations of a set is a little bit dry and academic but it illustrates some of the power that yield gives us.


To start here is the code which will ask for all the permutations and print them out.

///////////


static void Main(string[] args)
{
    foreach (List<int> permutation in permutations(9))
    {
        string result = string.Empty;
        foreach (int number in permutation)
        {
            result += number.ToString();
        }
        Console.WriteLine(result);
    }
    Console.ReadLine();
}

///////////

If you use the refactor tools of Visual Studio, they will generate the shell of the following function.
For the body, recursively call the permutations(int) function, then place the current item in all positions of the resulting permutations. Of course the recursion has to stop somewhere, so on the smallest set, a set of one, just return the list with one item.

///////////


private static IEnumerable<List<int>> permutations(int size)
{
    if (size == 1)
    {
        List<int> list = new List<int>();
        list.Add(1);
        yield return list;
    }
    else
    {
        foreach (List<int> permutation in permutations(size - 1))
        {
            // Insert the new item at every position, including the end.
            for (int i = 0; i <= permutation.Count; i++)
            {
                List<int> list = new List<int>(permutation);
                list.Insert(i, size);
                yield return list;
            }
        }
    }
}


///////////

If you run this code, it generates all permutations as expected.

What is surprising is some of the performance aspects of this code.
Even though there is recursion involved, the recursive call is only made 'n' times.
At the same time the memory being used is a constant times 'n' squared.
And as an additional kicker, the number of cycles of execution is a constant (I think between 3 and 4) times 'n' factorial.
(The last is actually pretty good. If you had a really long program that hardcoded each permutation, it would take 'n' factorial lines of code.)

If you study your Knuth you could probably unearth some code that performs better on one or more of these aspects. But such code is a bit obtuse to read, and slight changes result in disastrous errors that are not obvious.

Compare this to how natural it was to write this code. The code reads as a description of how to generate permutations.

The performance comes from the strange nature of yield. It converts the method into a Generator instead of a Subroutine.


Thursday, May 15, 2008

All that Team System

Say anything about Microsoft Team System and chances are you will be wrong because your information and/or understanding of Team System is incomplete.

Version Control is all new. Most of what you know based on SourceSafe no longer applies. Much of the rationale for using alternative version control systems is no longer true. Keep your eye on changesets.

Risk Tracking is a new feature introduced with Team System. You may have developed or purchased a separate system for this. You will have to reconsider because this Risk Tracking system is natively integrated with Version Control.

Bug Tracking is also a new feature of Team System. Again you may already have one in place. And again you will have to reconsider, because the Bug Tracking system is natively integrated with Risk Tracking and Version Control. Notice a pattern? It goes deeper. Recall those changesets that Version Control uses? The resolution of a bug can include links to the changeset that fixed it.

Work Item Tracking follows the same story: it integrates with Bug Tracking, Risk Tracking and Version Control. Again there are huge benefits from linking the changesets that satisfy the Work Item.

Unit Testing, Regression Testing, Performance Testing, Environment Testing; again integration with all the rest of Team System.

Likewise the Build features of Team System.

Now it is possible that you have cobbled together a collection of systems that do all of these things. Perhaps you have even integrated them and managed to get them to work together. If this is the case then there are two very probable outcomes. One is that significant effort goes into keeping the patchwork collection of systems working. The other is that certain quirks, annoyances and bugs are just tolerated and worked around on a daily basis.

But that isn't the end of the story. There are more features to Team System, plus we just got an update with version 2008 that contains even more. One could despair of ever catching up on this one offering from Microsoft.

But there is hope. When Visual Studio came out there was a similar flood of features, and with each new version of Visual Studio more tools were added. We never caught up with Visual Studio. In a way we never had to catch up. We just started using it. The more we learned the more efficient we worked. It was always a balancing act between getting it done and spending time learning how to do it easier. We survived the Visual Studio feature flood. We will survive the Team System feature flood. And what a great problem to have!

permalink

Wednesday, May 14, 2008

Yield The Danger

In "Return Yield and Return"
http://andrewboland.blogspot.com/2008/05/return-yield-and-return.html
I pointed out the odd implementation and behavior of the yield statement.

Here is an example of unexpected behavior of yield. Consider the code:

class Program
{
    public static int globalCount;

    static void Main(string[] args)
    {
        foreach (int number in collection())
        {
            Console.WriteLine(number.ToString());
            SecretFunction();
        }
        Console.ReadLine();
    }

    private static IEnumerable<int> collection()
    {
        globalCount = 1;
        yield return globalCount;
        globalCount++;
        yield return globalCount;
        globalCount++;
        yield return globalCount;
    }

    private static void SecretFunction()
    {
        //
    }
}

/////////////////
As written this results in the output:
1
2
3

However if the body of SecretFunction() is replaced with
globalCount = 0;

The resulting output is:
1
1
1

According to the rules this is all correct, but it might come as a surprise.
We are used to considering functions in isolation. In fact this is the prime reason for functions: by isolating code we reduce the cognitive load required. But given the way yield is implemented, this cognitive load is radically increased. We cannot mentally check off the collection function; we have to be aware of this behavior.

It gets worse if we touch this code. Consider changing the collection function to this allegedly equivalent code:

private static IEnumerable<int> collection()
{
    List<int> list = new List<int>();
    list.Add(1);
    list.Add(2);
    list.Add(3);
    return list;
}

/////////
Under both implementations of SecretFunction, it results in:
1
2
3

/////////
What are we to think here?
There is no doubt that unexpected bugs will be coded as a result of this obscure behavior of yield.
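If isolation is what we want, one defensive sketch is to keep the generator's counter in a local variable (the local name 'count' and the class name are mine, not from the post); outside code mutating the static between yields then has no effect on the sequence:

```csharp
using System;
using System.Collections.Generic;

class IsolatedCount
{
    public static int globalCount;

    // Defensive variant: the counter lives in a local, so the sequence
    // cannot be changed by callers between yields.
    public static IEnumerable<int> collection()
    {
        int count = 1;
        yield return count;
        count++;
        yield return count;
        count++;
        yield return count;
    }

    static void Main()
    {
        foreach (int number in collection())
        {
            Console.WriteLine(number);
            globalCount = 0;  // now harmless: the generator ignores it
        }
        // Prints 1, 2, 3 regardless of what the loop body does to globalCount.
    }
}
```

The general habit this suggests: treat everything between two yields as code that may run at an arbitrary later time, and avoid reading shared mutable state there.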

permalink

Tuesday, May 13, 2008

Coroutines in C#

There are numerous statements that the yield statement in C# uses Coroutines or results in Coroutines. This is not precisely correct. What the yield statement creates are Generators.

To recap some terminology: Generators, like Coroutines, are generalizations of Subroutines. Both Coroutines and Generators allow multiple entry points, so execution can be temporarily suspended and later resumed, whereas a Subroutine has only one point of entry and returns only once.

What distinguishes Coroutines from Generators is what is returned in the yield. Coroutines yield the next Coroutine to be invoked (or resumed). Generators yield a value to be returned to the parent function.

When yield is used it must be within an iterator block, a function which returns IEnumerable. As such the resulting function is a Generator and not a Coroutine.

The good news is that Coroutines can be created or emulated from Generators by using a Dispatching function.

Consider the following code in C#:

class Program
{
    static List<string> stringBuffer;

    enum NextMethod
    {
        Produce,
        Consume
    }

    static void Main(string[] args)
    {
        stringBuffer = new List<string>();
        Dispatcher();
    }

    private static void Dispatcher()
    {
        IEnumerator<NextMethod> enumeratorProduce = produceCollection().GetEnumerator();
        IEnumerator<NextMethod> enumeratorConsume = consumeCollection().GetEnumerator();

        IEnumerator<NextMethod> enumerator = enumeratorProduce;
        while (enumerator.MoveNext())
        {
            NextMethod next = enumerator.Current;
            if (next == NextMethod.Produce)
            {
                enumerator = enumeratorProduce;
            }
            if (next == NextMethod.Consume)
            {
                enumerator = enumeratorConsume;
            }
        }
    }

    private static IEnumerable<NextMethod> produceCollection()
    {
        int i = 0;
        while (true)
        {
            stringBuffer.Add(i.ToString());
            i++;
            yield return NextMethod.Consume;
        }
    }

    private static IEnumerable<NextMethod> consumeCollection()
    {
        while (true)
        {
            Console.WriteLine(stringBuffer[0]);
            stringBuffer.RemoveAt(0);
            yield return NextMethod.Produce;
        }
    }
}






Ideally we would like produceCollection and consumeCollection to yield control to each other directly.

Here the same effect is accomplished by yielding an enum that the Dispatcher function interprets to resume the correct Generator. If you set breakpoints on all the lines of the code you will see that execution within the Generators produceCollection and consumeCollection picks up on the line after the one where it last left off.
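As written, both Generators loop forever, so the Dispatcher never returns. For experimenting it helps to bound the producer; this sketch (the 'limit' parameter and the 'Output' list are my own additions, not from the post) terminates naturally when the producer's MoveNext returns false:

```csharp
using System;
using System.Collections.Generic;

class BoundedCoroutines
{
    enum NextMethod
    {
        Produce,
        Consume
    }

    static readonly List<string> stringBuffer = new List<string>();

    // Collected instead of printed so the result is easy to inspect.
    public static readonly List<string> Output = new List<string>();

    // Producer stops after 'limit' items; falling off the end of the
    // iterator block makes MoveNext return false, ending the Dispatcher.
    static IEnumerable<NextMethod> Produce(int limit)
    {
        for (int i = 0; i < limit; i++)
        {
            stringBuffer.Add(i.ToString());
            yield return NextMethod.Consume;   // hand control to the consumer
        }
    }

    static IEnumerable<NextMethod> Consume()
    {
        while (true)
        {
            Output.Add(stringBuffer[0]);
            stringBuffer.RemoveAt(0);
            yield return NextMethod.Produce;   // hand control back
        }
    }

    public static void Dispatcher(int limit)
    {
        stringBuffer.Clear();
        Output.Clear();

        IEnumerator<NextMethod> producer = Produce(limit).GetEnumerator();
        IEnumerator<NextMethod> consumer = Consume().GetEnumerator();

        IEnumerator<NextMethod> current = producer;
        while (current.MoveNext())
        {
            current = current.Current == NextMethod.Produce ? producer : consumer;
        }
    }

    static void Main()
    {
        Dispatcher(5);
        Console.WriteLine(string.Join(",", Output));   // 0,1,2,3,4
    }
}
```

The strict alternation in the output is the coroutine-like behavior: each item is consumed immediately after it is produced, even though neither routine ever calls the other.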

permalink

Monday, May 12, 2008

Separation of Concerns

Software developers can do all their own testing.
But it will not work out.

Software developers can also plan and envision the product.
But over time this will go astray.

What drives sustained success in software development is Separation of Concerns.
Which sounds like Specialization but is not.

The best example is Accountants and Auditors. An accountant can, by changing the rules, change the profitability of the company (think depreciation). So over time accountants come to have showing a profit, and an ever growing one at that, as their main concern. Auditors, on the other hand, are concerned only with how the accounting processes and practices are done. Auditors have no concern for whether a profit is shown, but rather that the profit and loss number was arrived at in an acceptable and consistent way. When your accountant becomes your auditor as well, or when your auditor accepts at face value all that your accountant does, you can expect an Enron type outcome.

Software Development is different because it has three concerns that need to be separated: Specification, Implementation and Validation. It is better to think in these terms rather than the roles that they are usually mapped onto: Business Analyst, Software Developer and Tester/QA. Business Analyst implies some concern for the business as a whole. Tester is close, although its emphasis is on bugs, and Quality Assurance puts the focus on Verification.

Specifying what is to be created must be completely separate from Implementation, which is how it will be created. Time and time again the pattern repeats of software developers pushing back and not wanting to implement something because it is hard, or because it does not lead to using the technologies or practices that are currently fashionable. And by fashionable, I mean well rewarded by the marketplace. If you hold both the Specification and Implementation concerns at the same time, the temptation to compromise Specification to benefit Implementation is immense.

When it comes to holding Implementation and Validation concerns at the same time the problem is even worse. Testing may reveal a problem. Knowledge of what it takes to fix it hinders the desire to immediately log it and raise it as a concern. Here again compromising Validation to benefit Implementation is the easiest route.

To keep this to a proper length I will state the next four and return to them later with separate articles:

Specification is about serving the customer.

Testing is about isolating the problem.

Development is about solving the problem.

The process provides the framework. The process is the automated assembly line of software development.

permalink

Sunday, May 11, 2008

Return Yield and Return?

Debugging some C# code involving the yield statement revealed some interesting behavior.
The implementation of the yield statement is more complex than it appears on the surface.
Consider the following code:

class Program
{
    static void Main(string[] args)
    {
        foreach (string state in collection())
        {
            Console.WriteLine(state);
        }

        Console.ReadLine();
    }

    private static IEnumerable<string> collection()
    {
        // Console.WriteLine("before Alaska");
        yield return "Alaska";
        // Console.WriteLine("before Alabama");
        yield return "Alabama";
        // Console.WriteLine("before Kentucky");
        yield return "Kentucky";
    }
}

///////////////////

As expected this code results in:
Alaska
Alabama
Kentucky

Now uncomment the additional Console.WriteLine statements and the code will result in:
before Alaska
Alaska
before Alabama
Alabama
before Kentucky
Kentucky

Unless you have discovered this already, this should be unsettling. If you are not seeing what the big deal is, set breakpoints on every line of code (stepping through is not enough). What you will see is the code appearing to hop into the collection function, back out, and then back in at the line after the one it left on. It is not executing the function repeatedly from the top, and it is not exiting it completely.

This behavior has few equivalents anywhere else in the C# language. There is power here (probably not widely known) and there is danger here (probably not widely understood).
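The same mechanism can be seen without foreach at all. foreach is sugar over exactly this hand-driven enumeration (the class name is mine, and the "before" lines are uncommented here to show the interleaving); each MoveNext resumes the method right after its last yield:

```csharp
using System;
using System.Collections.Generic;

class ManualEnumeration
{
    public static IEnumerable<string> collection()
    {
        Console.WriteLine("before Alaska");
        yield return "Alaska";
        Console.WriteLine("before Alabama");
        yield return "Alabama";
        Console.WriteLine("before Kentucky");
        yield return "Kentucky";
    }

    static void Main()
    {
        // The compiler has rewritten collection() as a state machine
        // object; each call to MoveNext runs the body up to the next
        // yield and then suspends, which is why the "before" lines
        // interleave with the values.
        IEnumerator<string> e = collection().GetEnumerator();
        while (e.MoveNext())
        {
            Console.WriteLine(e.Current);
        }
    }
}
```

Seen this way, the hopping in and out under the debugger is just the state machine being resumed and suspended by MoveNext.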

permalink

Saturday, May 10, 2008

Proof that 2 = 1

This is a gem.

Take any two numbers that are the same, like 2 and (4/2), call them 'A' and 'B'.
Thus A=B

Start with identity A^2 - B^2 = A^2 - B^2.

Substitute A for one B: A^2 - B^2 = A^2 - A*B

Factor both sides: (A-B)(A+B) = (A-B)A

Cancel the term (A-B): (A+B) = A

Substitute A = B: B+B = B

So: 2B = B

Cancel the B from both sides: 2 = 1

It's an oldie but a goodie.
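Set out in display form, with the step where the trick hides made explicit (the cancellation divides by A - B, which is zero since A = B):

```latex
\begin{align*}
A &= B \\
A^2 - B^2 &= A^2 - AB && \text{substitute } A \text{ for one } B \\
(A-B)(A+B) &= (A-B)\,A && \text{factor both sides} \\
A + B &= A && \text{cancel } (A-B) \text{, i.e. divide by } 0 \\
B + B &= B && \text{substitute } A = B \\
2 &= 1 && \text{cancel } B
\end{align*}
```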

Friday, May 9, 2008

Microsoft Virtualization 360

Surely 2007 was the Virtual Year.

Although Virtualization has been kicking around for some time, 2007 saw some easy to use and free versions. Add to that, the hardware is now up to the job: virtualization often has one OS running on top of another, and that can be quite the performance hit. The result was that virtualization went from a power user or guru trick to the buzz of 2007.

So 2007 kicked open the door in terms of price, availability, ease of use and widespread acceptance, and the winter of 2007-2008 saw an ever increasing number of offerings and counter offerings. The offerings have gone beyond price and performance. Microsoft enumerates 6 uses or promises:

1. Virtual Presentation

2. Virtual Application

3. Desktop Virtualization

4. Server Virtualization

5. Virtual Storage

6. Virtual Network


Microsoft is marketing these under "Virtualization 360". This is reassuring: since so much is going on in this space, it is nice to know Microsoft is trying to gather its offerings in one place. Judging from the presentation there is a total of 5 offerings:


1. Microsoft Virtual PC

This is listed as the Desktop Virtualization solution, but it can run server OSes. Chances are good this version (and multiple copies at that) is already running in your company. The reason is simple: it is free, it is easy to use and it will run legacy OSes as well as server OSes.



2. Windows Server 2008 with Hyper-V™

Currently, if you want to host a virtual OS directly on the machine without a host OS getting in the way, you will need to use the Server version. If you need to run 64 bit OSes you will also need the Server version.

3. Windows Terminal Services


Terminal Services have been around for quite a while now. Microsoft wants us to think of this as Presentation Virtualization. The application will be run on one machine but the presentation of the application can be presented on many machines.

4. Microsoft Application Virtualization
This is similar to Windows Terminal Services. Here the application is run over the network without being installed. The key is that the configuration layer the application uses is separated from the OS. The intent is to reduce the conflicts created when different applications attempt to configure the platform during their install. Instead, this configuration information is kept in a central location. This opens up the added advantage of centralizing the updating and patching of applications.


5. Microsoft System Center Virtual Machine Manager

And one server to store them all, and rule them all... After working with Virtual PC 2007 for just a few days, I had so many machine images that I connected a 1 Terabyte external hard drive to my desktop just to store them. While this stopgap solution gave me space for a few hundred images, keeping them organized is still quite the challenge. Plus, as soon as you start working with a machine you are changing it, so you will need a way to generate new copies of your original machine. Then there is the problem of sharing images with coworkers. We routinely have dozens of images that illustrate active problems.

So there you have it: that is what Microsoft considers the complete circle of virtualization.

permalink

Thursday, May 8, 2008

The QA Environment

In Seven Development Environments:
http://andrewboland.blogspot.com/2008/05/seven-software-development-environments.html

I asserted that developer access to higher environments should be restricted, and that above QA, developers should have no access. Developers do not understand this and object. Since the objections are plentiful and compelling, a reasoned response is warranted. Both the objections and the responses vary by environment; QA, UAT and Prod have similar but differing reasons given for developer access. Thus I will group these by environment and take on the QA ones first.

So the top three compelling reasons for access to QA (Quality Assurance, also known as Test) fall into one of:

1. I need access to QA so I can install the program.
2. I need access to QA so I can fix the bugs they find.
3. I need access to QA so I can understand or diagnose the problems QA finds.

What I like about these is they are versions of "I need it to get the job done."

1. I need access to QA so I can install the program.

This is common and reveals a deeper problem or problems with the software development process. Either there is no installer or setup process for the software, or the installer/setup is broken, or the software requires prerequisites or extensive configuration.
Taking the last of these first, the need for prerequisites or configuration after install: the reply here is that the installer or the setup process is incomplete. It is fairly easy to have the installer program check for prerequisites. These days it is also very easy to have the installer include the prerequisites and install them if they are missing. Likewise, configuration after install means that there is documentation missing. The QA team should be provided with setup instructions. If it is too complex for QA then it will be too complex for your customers.

2. I need access to QA so I can fix the bugs QA finds.

This sounds noble. And usually there is a follow on objection: "I need to fix this bug so QA can keep on testing."
But this starts a really bad habit. If the developer can change the release being tested by QA, what is it that QA is testing?
Say the version of the release that QA is testing is "1.0.23".
While that version is being tested the developers should be working on "1.0.24".
So what do we call this modified version? And then what is QA supposed to do, both while the developer is messing around in the QA environment and after? Is QA really testing that "1.0.23" release or is it now something else?
The best outcome is for the developer to spend a brief amount of time to understand the problem and then go work up a solution in the development environment.

3. I need access to QA so I can understand or diagnose the problems QA finds.

Here is where the developer needs to hold back and let QA do its job. When QA finds a bug it should isolate the bug and provide reproduction steps. If the bug cannot be reproduced it is not a bug. Insisting on reproducible steps will help in diagnosing the problem.

permalink

Wednesday, May 7, 2008

Visual Studio Pricing

Good luck figuring out Visual Studio pricing.

Once upon a time it was simple. You bought an MSDN subscription for under $1000 and Visual Studio was included. The version that was bundled varied by name, but it always came out to be the most complete version. In addition to the Studio you received nearly a copy of every single Microsoft software offering for development purposes. This deal was so good that some developers who could not talk their employers into buying it would buy it on their own. Stories circulated about guys having arguments with their spouses over the purchase.


You can look in vain, but this easy option is gone. Today's most complete edition of Visual Studio is the Team Edition. You can buy it this way but it will set you back $3000, $5000 or $10000. I have researched this and do not understand the price variances, but let's use the $3000 figure. This is clearly not a slam dunk. Look at the release schedule: 2003, 2005, 2008. This version will be outdated in just a few years, and you will need to upgrade soon.


Microsoft makes the case that this need to upgrade often is why you should go for one of their subscription based or open licensing options. This is also going to make your head swim. There is Open Business, Open Value, Open Value Subscription, Software License Agreement and Enterprise Agreement.

The complexity comes from the additional conditions to qualify for each of these.

So let's touch base with the retail option, which Microsoft calls "Full Package Retail".
The Standard version of Visual Studio is listed at $299; the Professional at $799.

Trying to get price quotes on the open/select/enterprise programs is difficult to say the least. The site http://www.ms-gearup.com/ is supposed to make this easy, but the best quote I could get out of it was $536 for Professional + MSDN. Now that seems like a good deal except it is a one year license, which makes no sense when comparing it to the "Microsoft Sales Tool Kit". It simply does not match any of the listed programs.


If there is any bright spot in all this licensing it is the Partner Programs. When you look at what is required, it would seem nearly every business could find a way to qualify. Of the three levels, Registered has no significant requirements and no cost. At the Registered level you can get the "Action Pack" for $299. The Action Pack includes a Visual Studio license plus numerous useful licenses. But at Registered you could also get the "Empower" program, which gives you 5 Studio licenses plus MSDN. Empower is $375 and has some tougher requirements, but it is hands down the best deal to get a cash strapped software shop up and licensed.

If you can put time and resources into becoming a Certified Partner, the direct cost is $1500 and you get 5 licenses plus MSDN. Gold has the same direct cost of $1500 but kicks in 10 Visual Studio Team Edition for Developers licenses plus MSDN.

The Partner programs grant a large number of licenses, including Windows Server, Office and numerous others. So while this seems like the hands down best deal, the requirements to qualify will take some effort. And if you are a sole proprietor, $1500 a year won't seem like such a good deal.

permalink

Tuesday, May 6, 2008

Seven Software Development Environments

How many environments are enough?

Well, one is definitely not enough. It is usually a sign of deeper problems when developers do not insist on separate environments for development and testing. One problem could be simple ignorance, in which case a few definitions are in order. So here are a typical seven.

  1. Development
  2. Build
  3. Integration
  4. QA
  5. UAT
  6. Staging
  7. Production

They go by slightly different names from company to company but you should be able to recognize them by their function. It is also possible that several have been collapsed into one.


Development: This is the wild west. Developers get to do anything here and they will. As such anything proven here is suspect. Typically the name is shortened to "Dev".


Build: Hopefully source control is in use; if it is, there is a chance automated builds or integration builds will occur on a machine that is not a development machine.

Integration: As the name suggests, this is where everything comes together. A typical software solution now routinely involves multiple applications under development, a few legacy applications or libraries, some third party tools and servers. Setting it all up usually requires some configuration. This is the first environment where the application is installed. Team leads worth their salt will restrict access to this environment. Perhaps read only access will be allowed to team members, but the lead should insist that problems be fixed in Dev. Sometimes goes by the name "Int".

QA: Another name is "Test". Developers should have no role in this environment. If the QA team is good they will not even let development leads have access. QA testers should be able to handle the deployment to QA without the developers. If they can't, then you have bugs or a lack of documentation.

UAT: The longer name is "User Acceptance Testing". Here is where the client gets a chance to provide feedback. Since QA was able to install on QA, either QA or, if you are lucky enough to have one, a Configuration Management team will handle this transfer. Note that at this point developers are no longer needed to keep the application working.

Staging: Sometimes this goes by the name "Preproduction" or "PreProd". This environment is handy for reducing downtime and handling security issues. It is also common for the owners of the site to refuse the developers of the site any access to this environment.

Production: Shortened to "Prod", this is the final environment, the real environment.

permalink

Monday, May 5, 2008

512 Cores by 2017? Can I get more?

Call it the Agarwal corollary to Moore's Law.
Anant Agarwal predicts that the number of cores in the computer will now start doubling every 18 months.
http://www.eetimes.com/showArticle.jhtml;?articleID=206105179

As such we can expect commodity servers to be sporting 512 cores per CPU socket by 2017. This is a future that cannot come too soon as far as I am concerned. The only thing that is the least bit troubling is that any article (like the one above) related to this multicore paradise always casts it as a looming problem.

I shared this concern for a short while. The reasoning goes something like this: most apps are written as single threaded apps, each core will support only one thread at a time, and each thread will use only one core at a time. So when we put that humble single threaded app on the 512 core machine, only 1/512 of the processing power will be available to it. All those cores will be wasted!

But something about Agarwal's Corollary broke the spell. (Agarwal's Corollary probably won't stick; I bet it ends up as the "Even More Law".) By stating exact numbers one is struck by a sense of "That's it?" Not to be greedy or unappreciative, but I will have no trouble making use of 512 threads in an application.

But before diving off into the specifics for one app, let me go look at the process monitor and see what is going on. Okay, I have 90 processes running with 970 threads. So it looks like the typical app will use 10 or more threads all by itself.

970 threads on the machine: talk about a plate spinning act. Most people don't realize what the poor OS needs to do here. This is only a dual-core machine, so only two threads can be running at a time. But each and every one of those 90 processes needs to be responsive as if it were the only one running on the machine. To do this the OS is, under the covers, preempting threads, packaging up their state, selecting another thread to run, unpackaging its state, reloading the thread, letting it run for a slice of the CPU, and on and on. If the OS is successful it manages to get through all 970 and back to the first one before anyone notices how sluggish the apps are. Think of a plate spinning act where two guys are keeping 970 plates spinning.

Fast forward to the future: 512 cores means the plate spinning act will drop from 485 plates per core (970/2) to about 2 (970/512). All of the context switching burden and performance hit will drop off to nearly zero. So this poor machine could use those 512 cores now. In fact, let me launch Visual Studio and SQL Server Management Studio. Now I am up to 97 processes and 1063 threads. Which raises another interesting feature of current software: a single application may spin up several processes as well as multiple threads per process. No doubt large numbers of cores will be consumed by multiprocess applications. Even if the programmers involved were unable to multithread their applications, they could create several processes.

permalink

Sunday, May 4, 2008

AP Calculus Question

My wife is prepping her students for the AP Calculus exam. Her students have the fundamentals, so now she is guiding them through past exams, dissecting questions and analyzing the answers. A pattern emerges on most questions: there are four question parts after the setup.


The first part is usually a simple definition. The next two parts seem to test comprehension of the subject. The last part appears to be unworkable by average students; sometimes it uses concepts or tricks that are not obvious or known to calculus students.

Question:

Find values for m and b in the equation: y=mx+b,
that will satisfy the equation: y' = y + 1


Solution:
Taking the derivative of y = mx + b: y' = m
Substituting into y' = y + 1:
m = mx + b + 1

At this point everyone is stuck, because there is one equation with three unknowns.
The not so obvious trick is that we are solving for m and b, but the equation must hold for every x.
So the term involving x must disappear, which can only happen if m = 0. Substituting in m = 0:
0 = 0 + b + 1

Thus b = -1. As a check: y = -1 gives y' = 0, and y + 1 = -1 + 1 = 0.

So it is solvable at the end of the day, but what should we tell the kids taking this exam? This is just a trick question, and it really does not illustrate anything related to calculus.

Saturday, May 3, 2008

12 Basic Software Development Practices

The gap between programming, or writing code, and developing software is growing.
This paradox is driven by the proliferation of easy to use tools, libraries, frameworks and OSes designed to ease the burden of software development.

Regardless of why, my experience has been that the typical software development shop is not executing on basic, fundamental software development practices. A trend of late that I find even more alarming is that quite often many of the programmers and managers are not even aware of these practices, or do not insist upon implementing them. It has reached the point that I have enumerated 12:

12 Basic Software Development Practices

1. Source Control
2. Issue Tracking
3. Bug Tracking
4. Work Item Tracking
5. Environments
6. Integration Builds
7. Automated Builds
8. Unit Testing
9. Regression Testing
10. QA
11. Versioned Releases
12. Setups

So for a while my thinking was that I could just inquire about these 12 before taking a new contract or engagement and all would be well. It is not surprising that I was always told "oh yeah, we have all of that..." And it is also not surprising that expectations did not match reality. So I formulated follow-on questions to determine the depth of practice.

Questions to determine depth of practice

1. Source Control

Where is the code kept?
Central location?
Individual machines?
Which version is authoritative?

2. Issue Tracking (Issues are a catch-all term for everything that does not fall into the bug or work item buckets. They include, but are not limited to, 3rd party software issues, network issues and development machine issues.)

Is there a Central reporting tool / repository for issues?
Is there an established work flow?

3. Bug Tracking

Is there a Central reporting tool / repository for bugs?
Is there a bug workflow which includes states for:
Initial
Approved
Denied – Feature, works as designed
Denied – Not reproducible
Fixed
Verified
Reopened
Closed

4. Work Item Tracking

Is there a central reporting tool / repository for new features and enhancements?
Is there a work flow which includes states for:
Initial
Coded
Test
Failed
Accepted

5. Environments

Do separate environments exist for each of?
Development
Integration
Quality Assurance
User Acceptance Testing
Production

Is developer access to environments higher than Development limited?
Is developer access to environments higher than Integration denied?
Does QA or Change Management own the process of moving releases from one environment to the next?

6. Integration Builds

Does the code build from source control?
Does it build on a machine that is not a development machine?

7. Automated Builds (hourly/daily)

Is the code being built on a regular basis?
Is this build done by an automated process?

8. Unit Testing

Do developers check in unit tests with their code?

9. Regression Testing

When code is built for release are all unit tests run?
Bug, Feature and QA tests?

10. QA

Is QA done by someone other than developers?
Does QA write automated or unit tests to demonstrate bugs?
Does QA write automated or unit tests to test common test cases for a feature?
Does QA write automated or unit tests for security and other required practices?

11. Versioned Releases

Is there a version number definition?
Are the products tied to the version number with each release?
Is the version number incremented with each release, both internal and external?
Does QA report bugs and test features according to version number?

12. Setups

Does each application have an install program?
If it does not have an install program are there instructions documented for manually installing the application?
After the install does the program run or is more intervention required?

Footnote. Microsoft Team System can provide them all.

permalink

Friday, May 2, 2008

MultiCore MultiBrain

AMD MultiBrain machine...

Or something like that. It popped up on my radar twice in one day, and after the typical Google research it became apparent this is simply another way of saying MultiCore. This term will not help and will only confuse.

The second time it occurred was on NPR during drive time. I cannot recall the show or episode but I do recall thinking "this is new I need to chase down what AMD is up to.."

The first time was on a Wall Street Journal podcast (I think). So you can see the pattern: venues for popularizing technology will start to use Multi-Brain when they should say Multi-Core. No doubt some editor felt Multi-Core was too complex. When told the details ("the new CPU designs have taken the core functionality of the CPU and duplicated it within the CPU multiple times"), the editor trotted out the old 1980s standby of CPU = brain, and since this gives you multiple CPU capability it must be "MULTI-BRAIN!"

Unfortunately, when I hear Multi-Brain I think multiple CPUs. And what is worse, since no one says brain for CPU anymore, I'm left thinking there is something new afoot: AMD must be adding value to the CPU to make it more brainy!

permalink