Monday, November 17, 2008

FastMM's Multicore Performance Scaling

Having used .NET's asynchronous sockets, FileStream and WinForms' Invoke / BeginInvoke, fallen in love with their design, and knowing that they use the IOCP / thread pool WinAPIs under the hood, I decided to write my own in C++, using C++ Builder. Everything went well until I started testing. I realised that my dual core machine didn't seem to reach full utilization when running some of the most intensive tests - the kind that easily cause the thread pool to spawn extra threads to serve the load.
I started investigating and eventually managed to reproduce the issue with just the following code:
void __fastcall TAnsiStringTesterThread::Execute()
{
     AnsiString str;
     for (int i=0; i<10000000; i++)
     {
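          // each assignment allocates a new string (GetMem) and frees the old one (FreeMem)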
          str = " something ";
     }
}
Delphi didn't seem to suffer from the same problem at first sight when I ran the Delphi equivalent of the above. That is, until I tried the following (instead of assigning a string literal, I did an IntToStr):

procedure TAnsiStringTesterThread.Execute;
var
  i: Integer;
  str: string;
begin
  for i := 0 to 10000000 - 1 do
    str := IntToStr(10000000);
end;

What these two snippets have in common is that they both call LStrFromPCharLen, which eventually leads to a call to GetMem and, when the string's ref count drops back to zero, a call to FreeMem. Could GetMem and FreeMem be the culprits? As it turns out, yes. To put the hypothesis to the test, I ran a tight GetMemory / FreeMemory loop in 2 threads (sketched below) and observed the CPU usage. Unsurprisingly, only 50% of my dual cores were utilized, even though the utilization spread across both quite evenly.

In my search for a better memory manager, I came across the Intel Threading Building Blocks library. Among other useful things like parallel loops, concurrent hash maps and a lock-free queue, it has a scalable memory manager. With that, I wrote a BorlndMM.dll wrapper and called it TBBMM. Here's the result:



In the single-threaded test, TBBMM is only 20% faster than FastMM. From 2 to 8 threads on a dual core machine, however, the improvement ranges from 2.25x to a staggering 2.5x.
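For the record, the 2-thread stress test looked something along these lines (a minimal sketch - TMemTesterThread is a hypothetical TThread descendant like the one above, and the block size is arbitrary):

void __fastcall TMemTesterThread::Execute()
{
     // Tight allocate / free loop - hammers the Borland memory manager
     // (FastMM by default) via GetMemory / FreeMemory.
     for (int i = 0; i < 10000000; i++)
     {
          void* p = GetMemory(16);
          FreeMemory(p);
     }
}

Run two of these threads on a dual core machine and watch Task Manager: with FastMM, total CPU usage hovers around 50% even though both threads are flat out.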
For more information / to download TBBMM, visit my TBBMM webpage.

Friday, October 24, 2008

C++ Builder 2009 Compiler Bug Fixes... Finally!

Came across this http://dn.codegear.com/article/38715 today and found that CodeGear has finally fixed all the compiler bugs I filed back in 2005 (3 years ago). Heck, if you do a search for "Zach Saw", you'll find that one of the entries has a path to my "My Documents" ;)

Anyway, kudos to Embarcadero for letting the team work on these C++ compiler bugs - the very bugs that drove most of us away into the clutches of Microsoft Visual Studio / C# back then. These compiler bugs were by no means trivial - they were so nasty that they simply rendered any effort to create a stable 24/7 server impossible.

With all these bugs fixed, C++ Builder would now be the definitive tool for creating a proper server (especially versus C#) - low memory usage, blisteringly fast, and it shares most of its design patterns with .NET (in fact, one could argue that C# is heavily inspired by Delphi / C++ Builder). I spent just 3 days writing a framework which is intimately similar to .NET's asynchronous IO design and its Control.BeginInvoke / Control.Invoke design. In the case of the latter, I found that with C++ you get more type checking when passing parameters to the callback than you would with C#. In fact, even the return type of EndInvoke can be type checked with RTTI (at run-time unfortunately, but in any case it is automatic, unlike .NET).
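To illustrate the kind of compile-time checking I mean (not the actual framework - just a toy sketch with made-up names), a templated call wrapper lets the compiler verify both the callback's parameter types and EndInvoke's return type:

template <typename TResult, typename TArg>
class AsyncCall
{
public:
    typedef TResult (*Callback)(TArg);
    AsyncCall(Callback cb, TArg arg) : cb_(cb), arg_(arg) {}
    // A real framework would queue the call onto another thread;
    // here we invoke directly just to keep the sketch short.
    TResult EndInvoke() { return cb_(arg_); }
private:
    Callback cb_;
    TArg arg_;
};

int Square(int x) { return x * x; }

int main()
{
    AsyncCall<int, int> call(&Square, 7);  // argument type checked at compile time
    return call.EndInvoke() == 49 ? 0 : 1; // so is the return type
    // AsyncCall<int, int> bad(&Square, "7"); // would not compile
}

With Control.Invoke in C#, by contrast, the arguments travel as object[] and any mismatch only surfaces at run-time.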

[ILINK32 Error] Fatal: Unable to open file .obj

If you have been playing around with Delphi packages compiled for C++ Builder in Borland Developer Studio or CodeGear RAD Studio, you'll undoubtedly have run into this error message. You can also run into it when you are not dealing with Delphi packages at all (e.g. when using a run-time package or a static library).

Let's start with Delphi packages.

What is suspicious is the way the linker reports the error - such as [ILINK32 Error] Fatal: Unable to open file MyComponent.obj. You search the entire hard drive for MyComponent.obj only to find the closest thing to be MyComponent.dcu, which is compiled in Delphi from MyComponent.pas. So how and where do you get MyComponent.obj?

The answer is that this .obj file actually lives inside a container with the extension .LIB. If the package containing MyComponent.pas is called MyPackage.bpl, then you need to look for MyPackage.lib. While an installed package should automatically be added to the default list of included packages when you create a new project, that doesn't always happen (i.e. a bug). All you have to do then is open your current project file (.cbproj) in a text editor and add MyPackage.lib to the tag listing the package libraries. Reload your project and you should be able to link successfully.

If you are not dealing with Delphi packages and you get that error message, it means you're trying to use a static library, or a run-time package (and sometimes even a design-time package whose components are not dropped onto a form / data module). This is easily fixed and is simply how linkers are expected to work. There are 2 ways to fix it:

1) Add the lib file to the project (I don't like this, as you rely on the user of the static library to remember to add the lib file every time they use it in a new project)
2) Add the following line to a header file which is guaranteed to be included when the static library is used:
#pragma comment(lib, "your library name.lib")
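For illustration, the header for a hypothetical MyLibrary.lib might look like this (names made up):

// MyLibrary.h - ships alongside MyLibrary.lib
#ifndef MYLIBRARY_H
#define MYLIBRARY_H

// Anyone who includes this header links against the library automatically.
#pragma comment(lib, "MyLibrary.lib")

int __stdcall MyLibraryFunction(int value);

#endif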

That's it.

I haven't tried C++ Builder 2009, but I hope CodeGear has found a better way to do this. Actually, I'd suggest they simply make all Delphi-compiler-generated C++ Builder files include that #pragma comment(lib, ...) line so that we never have to muck around with .cbproj files any more. In fact, this would also mean no more annoying messages prompting you to remove packages you don't need when you create and try to save a new project.

Wednesday, October 8, 2008

BUG: April 08 Hotfix for CodeGear RAD Studio Skips Update

I've run into yet another Borland / CodeGear bug - this time with the April 08 Hotfix installer. It says it has successfully applied the Hotfix when it really hasn't, and you would have noticed on the installation summary screen that it says "Current Hotfix Level: 1" instead of 0.



What happened?

Well, to begin with, you'll run into this issue every time you have applied this hotfix and then reinstalled CodeGear RAD Studio (i.e. uninstall and reinstall from the original DVD). The registry setting for the HotfixLevel doesn't get cleared.

The workaround is simple. Go into the registry (plenty of registry action with Borland / CodeGear - I'm sure you're all used to it by now) and open the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\Borland\BDS\5.0

(Note: this assumes you installed RAD Studio for all users. If you installed it for the current user only, look for the same key under HKEY_CURRENT_USER instead)

In there, you'll find an entry called HotfixLevel with 1 as its value. Change it to 0 and restart the hotfix. It will now correctly detect that it hasn't updated the files and proceed to update them.
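If you prefer the command line, the same change can be made with reg.exe (this assumes the all-users key; point it at HKCU instead if that's where your key lives):

reg add "HKLM\SOFTWARE\Borland\BDS\5.0" /v HotfixLevel /t REG_DWORD /d 0 /f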

Delphi Packages not appearing in C++ Builder personality

One of the new features in CodeGear RAD Studio 2007 (actually in Borland Developer Studio, the 2006 version of RAD Studio) is the ability to get the Delphi compiler / linker to generate all the files required by C++ Builder (.hpp, .obj, .lib etc.) for a Delphi package, without having to create an equivalent C++ Builder package.

Unfortunately, one ugly bug has plagued this feature - you may find that the components in the package you've installed do not come up in the designer's Tool Palette. This bug was first reported by yours truly against BDS 2006, and it appears that it hasn't been fixed and won't be, even in RAD Studio 2009. That's more than 3 years since I reported it in QC! Wow!

If you left all settings at their defaults when you created the package in Delphi (which you most likely did), you will find that the components you've registered in the package won't appear in a C++ Builder project. That is simply because you have not specifically told the linker to "Generate all C++Builder files". So you go back to the Delphi package, select that option in the Linker output and recompile / reinstall. This time, however, you would expect the installed components to appear in the Tool Palette when you try to use them in C++ Builder... Surprise surprise, they're not there!

It's as though once the IDE has decided that a package is Delphi-only, it remains a Delphi-only package forever. Note that even if you uninstall, recreate and reinstall the entire package, it will still be invisible to C++ Builder - that is, until you rename the package. That's because the IDE remembers the package name!

The other, cleaner workaround is to go into the registry via Regedit.exe and remove all of the following entries referring to your package (say, MyPackage.bpl).

Key:
HKEY_CURRENT_USER\Software\Borland\BDS\5.0\Known Packages\
Entry:
Look for the entry with [path]\MyPackage.bpl and remove it

Key:
HKEY_CURRENT_USER\Software\Borland\BDS\5.0\Package Cache\
Look for the key called MyPackage.bpl and remove the entire sub-key


Key:
HKEY_CURRENT_USER\Software\Borland\BDS\5.0\Palette\Cache\
Look for the key called MyPackage.bpl and remove the entire sub-key


Remember to first shut down CodeGear RAD Studio before changing the registry keys. Once you have removed the entries, restart RAD Studio and this time, remember to select "Generate all C++Builder files" for all build configurations (e.g. Debug and Release) before you install the Delphi package.
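For those comfortable with the command line, the last two removals can be scripted with reg.exe (a sketch, again assuming the package is MyPackage.bpl; the Known Packages entry is a value named after the full path, so list it first and delete the matching value by hand):

reg query "HKCU\Software\Borland\BDS\5.0\Known Packages"
reg delete "HKCU\Software\Borland\BDS\5.0\Package Cache\MyPackage.bpl" /f
reg delete "HKCU\Software\Borland\BDS\5.0\Palette\Cache\MyPackage.bpl" /f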

And in the future, keep in mind to always set the linker to "Generate all C++Builder files" or set that as your default for all build configurations.

ps. Yup, they haven't rebranded it to CodeGear in the registry - it's still Borland as we know it! :)

Wednesday, October 1, 2008

QuickPHP v1.4 Adds Apache Mod Capabilities

Those who are familiar with Apache mods will welcome this new addition to QuickPHP. What sets QuickPHP apart though, is that it uses PHP itself to implement the mods.
I've detailed its inner workings before on this blog here.
You can download QuickPHP here and read the mod documentation here.
While I've made the interface to create mods available, there aren't any mods available yet. This is the time QuickPHP could really use your contributions - someone to write the equivalent of mod_rewrite, mod_log_referrer, mod_log etc. in PHP. Anything that might be useful to you is potentially useful to others too. So don't be shy - contribute your code in the forum!

Thursday, September 11, 2008

Project Offset will be Larrabee Exclusive

It's been officially confirmed by one of the developers on the Project Offset team (Paul Tozour) that Project Offset will be exclusive to a "select Intel-based platform".

From Project Offset's Forum:

"Right now, I can only pass on the official word:

Offset will be exclusive to select Intel-based platforms.

We look forward to being able to discuss more details in the future."


What this means is that Project Offset has been specifically designed to run only on Larrabee - utilizing every bit of the massive VPU to do graphics and physics that are traditionally impossible without heavy tweaking of the DirectX API.

But how is this game important? It's only one game. And more importantly - why is Project Offset a key player in the success of Larrabee?

Project Offset is firstly a game development tool / package with its own engine, and secondly a game - much like the Unreal 3 Engine. Ageia employed the same strategy with its PhysX, and while its card was not much of a success, the strategy worked - at the time of this post, there are over 100 games supporting PhysX, the majority of them based on the Unreal engine. Intel has bought everything it needs to own a game engine outright - first it bought Havok, and early this year (February 2008) it bought Project Offset.

Larrabee was seen as a saviour by Project Offset's developers - they were constantly wrestling with the DirectX API to get the effects they desired. They also worked very closely with other renowned developers in the field to try to solve some of the problems they encountered, but none came up with any good answers. This simply shows that there are more developers hitting the walls of the limitations imposed by the DirectX API than we previously thought. Larrabee frees them of this struggle and gives them the freedom to implement anything they can imagine.

While it is easy to see that the whole game industry is behind DirectX at the moment, you are completely restricted to the feature set it gives you. Even with shaders becoming more powerful, they are nowhere near the prowess, flexibility and maturity of x86. Perhaps you could argue that nVidia's CUDA could accomplish the same thing. Maybe. But nVidia hasn't bought / started to develop its own game engine yet. Yes, there is the Unreal 3 Engine, which uses PhysX, but it's bound to DirectX. A quick visit to the CUDA forum confirms that no one has even started working on a custom graphics engine based on CUDA.

So on the one hand, we have a plethora of DirectX games - almost every game engine out there is based on this API. Smaller game studios simply buy game engines to create games; we can exclude these from the equation, as they will adopt whichever standards the underlying engine relies on. That leaves us with the major players - those who create their own engines and sell them for a profit. They're the ones being limited by the DirectX API at the moment. They're the ones hitting the walls of creativity set by nVidia, ATI / AMD and Microsoft. They spend years coming up with techniques that utilize the API to build forefront, eye-popping graphics engines. Imagine if they were thrown a lifeline. Imagine the day they don't have to trick the API into doing something clever. That's where Larrabee comes in.

Don't be fooled by the statement made by the architect of the GTX 280. He said they considered creating something like Larrabee but found it wasn't a viable design. That may have been true a few years ago, or even now, but by next year nVidia could be heading down the opposite path yet ending up with the same design as Larrabee. The GTX 280 contains a fair number of general purpose cores, which sets it apart from the generations before it. Larrabee's design is basically the natural progression of the GTX 280.

Come next year, I wouldn't be surprised if nVidia's successor to the GTX 280 is somewhat similar to Larrabee - except for perhaps the most obvious difference: it won't be x86.

In any case, Intel's strategy at this point is nothing to scoff at - they've built a solid foundation for Larrabee. If anything, they're ahead of nVidia/CUDA at this stage of the game.

For those who are interested, reading through Project Offset's developer blogs and forums will give you an insight into life as a graphics engine developer.

Friday, September 5, 2008

Using PHP Script as a Plugin

A discussion in my QuickPHP forum has resulted in this new idea: a PHP script can simply be used as a plugin. Rather than conventional DLL plugins, we get the host to call into the PHP script and get the results back from it.
Of course, a DLL plugin has its advantages (i.e. it's compiled and runs faster), but in cases where you need to let your users configure part of your software with great flexibility, a PHP file instead of INI / XML files is definitely not a bad idea. It bridges the chasm between a DLL plugin and a simple config file. What's more, you only need to deploy the PHP DLL (e.g. php5ts.dll) along with your software and that's it - you get PHP's powerful regular expressions out of the box!
In the discussion, I suggested that PHP could be used to implement the whole of Apache's mod_rewrite module, without the need for an external INI file holding input and output regular expressions. Users would simply write a PHP script that manipulates the client request info and returns it to the server. This also means QuickPHP's mod_rewrite functionality could be so much more powerful than Apache's. In fact, I think one main PHP file is all I need - the main file, in turn, can simply call into other modules to work its magic if required. If QuickPHP_ReqMod.php is not found, QuickPHP reverts to its basic mode. No additional DLLs - no unnecessary memory usage.
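To make the idea concrete, here's a minimal sketch of a host calling into a PHP "plugin" and reading back the result (file names are made up, and it shells out to php.exe for simplicity rather than embedding php5ts.dll directly):

#include <stdio.h>
#include <string>
#include <iostream>

// Runs a PHP script with one argument and captures whatever it echoes.
std::string RunPhpPlugin(const std::string& script, const std::string& arg)
{
    std::string cmd = "php \"" + script + "\" \"" + arg + "\"";
    FILE* pipe = _popen(cmd.c_str(), "rt");
    if (!pipe)
        return "";
    std::string result;
    char buf[256];
    while (fgets(buf, sizeof(buf), pipe))
        result += buf;
    _pclose(pipe);
    return result;
}

int main()
{
    // rewrite.php would inspect the request path and echo the rewritten one.
    std::cout << RunPhpPlugin("rewrite.php", "/old/path") << std::endl;
    return 0;
}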

Friday, August 29, 2008

Testing PHP on Windows - in 5 seconds! Without installing Apache, IIS or even PHP.

Testing PHP on Windows in less than 5 seconds without installing Apache, IIS or even PHP. Is this possible?
Yes it certainly is - and it's FREE.
QuickPHP is designed specifically for this purpose.
Here are the steps to test PHP on Windows in less than 5 seconds:
  1. Download QuickPHP WebServer (quickphp_webserver.zip) from http://www.zachsaw.com/?pg=quickphp_php_tester_debugger
  2. Unzip the file into C:\QuickPHP
  3. Run QuickPHP.exe from the folder
  4. Hit Start.

That's it!
You can now test your PHP webpages by browsing to http://127.0.0.1:5723, and QuickPHP will run 'index.php' in the 'C:\' folder (of course, make sure you have a file called 'C:\index.php' - if not, copy the following code into 'C:\index.php').
If you wish, you can change the webserver's root to your local webpage folder and the default document name to point to your own index file.
index.php: 
<?php phpinfo(); ?>
If you need any help, you can visit the QuickPHP forum.

Wednesday, August 20, 2008

Larrabee with FPGA pledge

Following up on my pledge to Intel to include a real-time reprogrammable highspeed FPGA on Larrabee, it looks like it would definitely be very useful in a number of applications. With texture filtering done in hardware while Larrabee runs as a GPU, we could reprogram the FPGA to do motion search for H.264 encoding. We're entering a new era where computer engineers have to be very good at both software and hardware design.

I suggested the idea of including an FPGA as part of the CPU to a fellow employee / manager back at Intel but unfortunately it never got any attention. That was back in 2001. 7 years on, we're now seeing companies making full use of FPGAs to accelerate applications that aren't efficient to run on a CPU. Larrabee solves some of the things I said an FPGA would solve, but there are definitely several other applications out there that would benefit from an FPGA. I also pointed out that Intel should come up with a library (i.e. ready-made hardware designs) that developers can simply load into the FPGA to accelerate specific types of algos.

Take a look at this.

Larrabee picks up where CUDA fails

Having read most of the publications nVidia, ATI/AMD and Intel made available to SIGGRAPH, I have to say I'm a believer in Larrabee. Most of the problems that plague CUDA involve having to design and offload only certain parts of the algo - parts suited to the GPU and small enough in terms of bandwidth utilization across PCIe - and then getting the results back via the same path.

The reason this is even being discussed lies in the fault of the whole GPGPU concept. The GPU is good at one thing - being fed (compressed) textures and commands that are then pumped through its fat pipelines to produce results (a rendered image). To use it for something more generic, we have to deal with issues such as PCIe bandwidth and feeding the onboard frame buffer with enough contiguous data to work with. Say we had infinite video RAM. Even then, we'd still have to do some parts of the algo on the CPU, as the GPU is simply incapable of doing things like scalar operations and sequentially branching algos (namely tree algos - heck, CUDA doesn't even do recursion) effectively. With a measly PCIe link between CPU and GPU, any performance gained will most likely be offset.

CUDA is, at best, a DSP SDK - nVidia's attempt at using its GPU as a very basic DSP. Nothing more. Yes, you may find that offloading some parts of, say, an H.264 encoder will give you some gains. But go further and implement, say, anything beyond the baseline profile, and you'll run into trouble. You'll get some gains, no doubt, since the GPU is always a free agent when it's not otherwise utilized. Is it worth the effort though? Hardly. The x264 developer has gone on record saying CUDA is the worst API / language he's ever encountered (particularly the threading model).

Larrabee, however, will change the landscape quite a bit. The above-mentioned problems are exactly what Larrabee sets out to solve: OpenMP for the threading model, a much higher level of abstraction between the CPU and Larrabee (it's capable of running Pentium x86 instruction sets, so there's no need to go back to the CPU as frequently as with GeForce / Radeon), and SSE-style vector instruction sets - all directly targeted at the downfalls of CUDA!

When Pat Gelsinger said CUDA would just be a footnote in computing history, nVidia was a fool to laugh it off. It's already happening. Perhaps Wiki should start deleting their CUDA pages and footnoting the GPGPU pages with a short and sweet "meanwhile, there's CUDA" line. :)

Thursday, August 14, 2008

Larrabee and TFLOPS SP / DP - the TFLOPS race BEGINS!

There's much confusion over the upcoming Larrabee chip from Intel. It seems that most people who've tried to calculate the chip's peak performance in TFLOPS couldn't arrive at the 2 TFLOPS Intel claims Larrabee will achieve.

Larrabee's in-order cores can process a peak of 16 SP (single precision floating point) operations per clock (512-bit VPU, hence 16 SP or 8 DP). At 2GHz with 32 cores, you only get 1.024 TFLOPS SP (2GHz * 32 * 16 SP). So how come Intel claims it is capable of 2 TFLOPS in that configuration?

Well, here goes. 1.024 TFLOPS is the peak for most SIMD instructions, but if we take the MULTIPLY-ADD instruction into account (which Intel recently implemented in SSSE3 - or SSE4 for the non-informed), counting it as two operations per cycle, we multiply 1.024 TFLOPS by 2 - giving Larrabee a peak performance of 2.048 TFLOPS SP. Yes, that's Single Precision floating point (i.e. 32-bit) and not Double Precision (i.e. 64-bit) as some people claim. For DP, Larrabee would peak at 1.024 TFLOPS.

Before you go saying Intel's cheating, the 4870 HD also implements the same instruction, and its 1.2 TFLOPS SP figure is calculated against this specific instruction as well (same as Larrabee). A high-end 48-core Larrabee would give a peak of 3 TFLOPS at 2GHz.
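The arithmetic, as a quick sanity check (a throwaway snippet plugging in the figures above):

#include <iostream>

int main()
{
    const double ghz = 2.0;   // clock speed
    const int sp_lanes = 16;  // 512-bit VPU / 32-bit SP
    const int madd = 2;       // multiply-add counts as 2 FLOPs per cycle

    std::cout << ghz * 32 * sp_lanes        << " GFLOPS SP - 32 cores, plain SIMD\n";   // 1024
    std::cout << ghz * 32 * sp_lanes * madd << " GFLOPS SP - 32 cores, multiply-add\n"; // 2048
    std::cout << ghz * 48 * sp_lanes * madd << " GFLOPS SP - 48 cores, multiply-add\n"; // 3072
    return 0;
}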

An interesting thing about the MULTIPLY-ADD instruction: it's patented by 2 Japanese inventors. It's a single cycle instruction that does a multiply and an add - obviously very beneficial to vector calculations, and applications like GPUs depend heavily on this particular instruction.

I can see things shaping up quite nicely for Larrabee. Now that the GHz-race era is behind us, let the TFLOPS race begin!

nVidia gives Larrabee its blessing for CUDA and Physx

Just a thought - nVidia has said that CUDA will run on x86 too (duh, that would simply be an x86 C compiler, wouldn't it?) and since PhysX runs on CUDA, that means nVidia has (probably inadvertently) given its blessing to getting PhysX running on Larrabee. Now that wouldn't be a bad thing at all.

Very generous of nVidia. ;)

Highspeed FPGA to complement Larrabee

Before I joined Intel, I'd always had this idea in my mind - a highspeed FPGA as a coprocessor. There's never been a better time to propose this solution to the world. With the buzz going around Larrabee and its need for fixed function units, such as the rasterization unit of a GPU, it would be so much more flexible if these were implemented as a block of FPGA. The driver would then be responsible for configuring this block into whatever the application sees fit. Anything that cannot fit the cGPU paradigm can then be hardware accelerated via the FPGA block.

Any take on this, Intel?

About Intel Larrabee

About Larrabee and Larrabee vs GeForce / CUDA or ATI / CTM (without repeating what you could look up on other sites):

1) Michael Abrash, Tim Sweeney and John Carmack are all on board Intel's software team for Larrabee. This should give them a pretty solid team (understatement) for driver development.

2) A quote from GCDC'08: multi-thread your DirectX code and drivers. "3. Direct 3D runtimes and drivers account for 25-40 percent of CPU cycles per frame. This needs to be reduced in order to push performance!"

The freedom to offload this 25-40 percent to Larrabee and leave the CPU to process everything else is quite significant. This is, however, something they're still working on, as some calls involve the OS kernel, which is not the natural way things happen with Larrabee sitting on PCIe. Again, the ultimate goal is to get Larrabee onto your motherboard as a co-processor, in which case scheduling will be done by the OS just as it is for a normal processor. The design decision to use software task scheduling is obviously twofold.

3) CUDA does not support recursion (among several other things) - and it is unlikely to in the near future due to hardware limitations; unless nVidia implements sophisticated prefetch hardware like Larrabee's, it will most likely never happen.

Developers look for a free lunch. CUDA doesn't provide that very well, as it requires the algo to be completely rewritten - see www.gpuchess.com for example. That said, it doesn't mean nVidia can't emulate some of these features through other means, as GpuChess has done in its compiler. But that leads me to the next point.

4) CUDA is a C-like language. That's good - but how do you get C++ / C# / VB / Delphi / Java etc. developers to code for it? Not unless nVidia starts writing its own .NET IL runtime libraries and VCL runtimes for its hardware (read: doesn't make sense financially, and impossible in the limited time frame before Larrabee debuts). Larrabee gets all of these for free.

The final point is what I'm most excited about - you're not restricted to just CUDA-C. You're free to develop in whatever language you're most familiar with. The best part is, with binaries compiled for Larrabee (if you avoid the exotic mnemonics, of course), it will be possible to run them on a machine without Larrabee - albeit much slower, but at least they will run. I don't see any developers (bar hobbyists) getting excited over writing the same algo 3 different ways - CUDA, CTM and x86.

I don't know about the rest of you, but this looks like a very good idea to me. When I was working at Intel, I was going to propose something similar to Larrabee, but a more hardware-oriented solution. Maybe it's still possible. I'll leave that for the next post.

Tuesday, July 29, 2008

DotNUTs Framework - DirectoryInfo.GetFiles and DirectoryInfo.GetDirectories Callback / Event

The .NET framework is far from a refined / polished framework. With nearly everything I do in the .NET world, I run into a brick wall that eventually leads to writing my own wrapper around Win32.

Recently, I had to write a file searcher mainly used to look for files on USB hard drives. Each folder on the drive is made up of several tens of thousands of files and the drive is usually extremely fragmented (we're not talking about your conventional PC hard drive setup here). DirectoryInfo.GetFiles has a fatal flaw when it comes to enumerating files in a folder - it has no callback to report progress, which means there's no way to cancel the search.

The Directory.GetFiles method works well for folders without too many files, but in my case enumeration can take up to a few minutes over a USB 1 connection. It doesn't matter whether you implement this in a foreground or background thread - either way, your user will be forced to sit through the enumeration without any progress report or means to cancel it (unless you want to force a thread abort - and even then, there's no way to report progress).

So, WinAPI to the rescue, yet again. Using FindFirstFile and FindNextFile, we can easily achieve the same thing. The problem, however, is that in C# you need to p/invoke these functions, and the painful part is that you have to define everything that is a simple header file include away in C++. If that's not enough, you then need to verify that your definitions are correct and that they marshal the arguments back and forth properly.
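For reference, here's what the Win32 side boils down to in C++ (a sketch with my own names; error handling trimmed):

#include <windows.h>
#include <stdio.h>

// Called for every file found; return false to cancel the enumeration -
// the very thing DirectoryInfo.GetFiles can't do.
typedef bool (*FileFoundCallback)(const WIN32_FIND_DATAA& fd, void* userData);

void EnumerateFiles(const char* pattern, FileFoundCallback onFile, void* userData)
{
    WIN32_FIND_DATAA fd;
    HANDLE h = FindFirstFileA(pattern, &fd);
    if (h == INVALID_HANDLE_VALUE)
        return;
    do
    {
        if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY))
        {
            if (!onFile(fd, userData))
                break; // cancelled by the caller
        }
    } while (FindNextFileA(h, &fd));
    FindClose(h);
}

static bool PrintAndCount(const WIN32_FIND_DATAA& fd, void* userData)
{
    int* count = static_cast<int*>(userData);
    printf("%s\n", fd.cFileName);
    return ++(*count) < 100; // report progress as you go; stop whenever you like
}

int main()
{
    int count = 0;
    EnumerateFiles("C:\\Windows\\*", PrintAndCount, &count);
    return 0;
}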

Finally, with all that out of the way, I stumbled upon yet another .NET framework bug in my unit test: DirectoryInfo.LastAccessTime LIES! (I admit I don't actually need the LastAccessTime of a directory - but a bug's a bug.)

Directory: C:\Windows\assembly
Date Accessed, as reported by Windows Explorer: 29/07/2008 2:08 PM
Date Accessed, as reported by WinAPI: 29/07/2008 2:08 PM (29/07/2008 2:08:38 PM)
Date Accessed, as reported by dotNUTs framework (DirectoryInfo.LastAccessTime): 29/07/2008 2:30:08 PM

It's not the LastWriteTime nor the CreationTime - which leads me to think that dotNUTs must have pulled the value out of its arse. Not surprising, given my experience with dotNUTs.

Does anyone else feel that the .NET framework is plagued with bugs? Or am I just a bug-magnet?

Or maybe someone should help Microsoft with their unit tests?

p.s. I'm not the first to coin the word dotNUTs.

Sunday, July 13, 2008

Borland / CodeGear Delphi / C++ Builder

It's sad really to finally see Borland closing a chapter on its very successful product - the VCL.
The VCL has been the foundation of the .NET framework's Windows Forms portion and in many ways it still does a much better job - such as subclassing common controls and wrapping them into 'drag-drop'-able controls. Amazingly, Borland's core controls in the VCL have remained largely unchanged since version 5. With a few tweaks and additions, they were made theme-aware in Vista.
What can we say about the .NET framework then? Well, as of Orcas (VS 2008 for the less informed), most controls are still not Vista themed - and most are at around VCL version 3 standards in terms of features, bugs and extensibility (ease of derivation, subclassing etc.).
Yeah yeah, there's WPF, but heck, some controls in WPF are even worse than their Windows Forms counterparts. And using WPF simply means shooting yourself in the foot when it comes to finding the lowest common denominator - heck, I can't even run WPF apps at a decent speed on my 3-year-old laptop.
Microsoft's solutions - WPF or Windows Forms - are very half-hearted. They are neither very usable nor completely unusable. Most of the time, you'll have to invest a lot of your own time and resources to make something useful out of them.
Just for example, take the TreeView control - both Borland (CodeGear) and Microsoft have one. Which is better? Borland (CodeGear), hands down (I expect Borland fan boys to cheer and Microsoft fan boys to boo now - darn those people, get a life!).
I worked with a guy with MCSE accreditation last year (a typical Microsoft fan-boy) and was told that anything I could do with Borland (CodeGear), I could do with Visual Studio / the .NET framework. Sure. Try creating a multiselect TreeView that works in both Vista and XP, natively themed (i.e. the Vista explorer-mode theme in Vista, falling back to the active XP theme in XP). In Borland (CodeGear), you'd simply drag-and-drop and set the multiselect property to true. Zero lines of code. I dare you - you know who you are, you Microsoft fan-boy - try doing that in Visual Studio and see how long it takes you to do something similar.
Yeah, sure, I did it eventually - check out my Advanced TreeView Control here - but it took me some time. And sure, the fan-boy could simply say, "Well, I told ya, it could be done." And my reply would be, "Yeah, it could've also been done in pure ASM!"
Good on ya Microsoft!

Thursday, July 10, 2008

The Online Vultures

I think I'm going to coin a new term here - The Online Vultures.

There are quite a few companies out there waiting for a domain, particularly a very highly ranked domain (in search engines), to expire, and if the owner does not renew it, these companies pounce on the domain, like vultures ripping apart a carcass. If you then try to get the domain back, you'll face the wrath of these vultures pecking you to death! Not that you should let your domain expire, but that's a tale for another day.

How do these companies make money off sites like these?

Simple. They park a website on the expired domain, with a link oh-so-visible to "Enquire about the Domain". Following the link, you'll be presented with a form which asks for your contact details and the most important question of all: "How much are you willing to pay for the domain?" (all provided by a Domain Parking service).

The following company is one example:

created-date: 2007-05-04 18:16:31
updated-date: 2007-09-06 13:48:52
registration-expiration-date: 2009-05-04 00:00:00

owner-contact: WA-VirtualStockLtd
owner-organization: Virtual Stock House LTD
owner-fname: Andrew
owner-lname: Waggins
owner-street: Midtown Building, 625, D.R. Walwyn Square
owner-city: Charlestown
owner-state:
owner-zip: 001
owner-country: SAINT KITTS AND NEVIS
owner-phone: +18696124652805
owner-fax: +18696124652805
owner-email: vsh.ltd@googlemail.com


This 'company' (whether or not it's real) owns 3,557 other domains.

Try www.micrusoft.com and www.micrasoft.com, or even www.iApple.com - they are all domain parked. Amazing.

Tuesday, July 8, 2008

Application Data - The WinForms Cookies

The application data folder (%USERPROFILE%\Local Settings\Application Data) is guaranteed to be read-writable, and furthermore it is user specific. Combined with the ease of use of the IsolatedStorage class in the .NET framework, this sounds like the ideal place to save all your user settings, doesn't it?

It is. As an example, Microsoft Visual Studio caches all your project assemblies there. If Microsoft does it, it has to be correct, right? Yes and no.

While this is one of the most convenient storage locations, there is, however, the question of when and how we remove the data we have stored in the app data folder.

Microsoft Visual Studio suffers from this exact problem with its project assemblies - it really shouldn't, but it does. And when that happens, it will drive you nuts. What am I talking about? Here goes.

When a solution is loaded, all referenced assemblies with the "Copy Local" property enabled are cached in %USERPROFILE%\Local Settings\Application Data\Microsoft\VisualStudio\\ProjectAssemblies\. The next time you open the solution, Visual Studio checks the cache for the assemblies and will not recopy them even if they are out of date. This happens very frequently if you use any third party components / controls with designer support - every time you upgrade to a newer version, your solution breaks, as you now have an older version of the assembly in your cache while the designer expects to work with the version it was installed with.

The problem arises simply because Visual Studio does not remove the cache when the user closes the solution. It really should not retain the cached assemblies, because these files then remain indefinitely in your application data. Imagine: every single time you create a little application to test the behavior of certain components / controls (be they built-in or third party), VS caches all the assemblies. If Microsoft insists on not deleting the files, at least have the courtesy of doing it properly - this is analogous to Intel's Nehalem processor (the successor to the Core processor) using stale data from its L3 cache!

In the case of VS, the solution is simple - recopy the assemblies each time the solution is reloaded.
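In the meantime, the user-side workaround when a stale assembly bites is to blow the cache away yourself (a sketch - run it with VS closed, and substitute your VS version for <version>):

rd /s /q "%USERPROFILE%\Local Settings\Application Data\Microsoft\VisualStudio\<version>\ProjectAssemblies"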

Users of IsolatedStorage, though, have a bigger problem: if we use IsolatedStorage for application settings, when do we clean it up? When the software is uninstalled? That seems like quite a hassle - in the uninstaller, figure out which subfolder belongs to the application we installed, enumerate all the user profiles and delete the data files.

OK. Before we go any further, how do we figure out which subfolder our application has used for IsolatedStorage? Is this relative location even the same for all users? Where's the documentation for this?

Too much hassle means one thing - most developers / companies will simply leave the files in IsolatedStorage after the application is uninstalled. I suppose that's a bit better than leaving the data in the Windows registry like in the old days.