Monday, November 28, 2011

Shopping for the End Times

I don't know who the Antichrist will be, but while shopping today at Goodwill, I learned his suit is ready.

Wednesday, November 23, 2011

Arduino Boot Camp III


We had our third Arduino Boot Camp on Saturday. I got my second project working - a green light that slowly increases and decreases in intensity. It's not much, but it's a start. And we programmers love all our babies, no matter how small.
The source code:
// Pin assignments for the RGB LED
const int redLedPort = 11;
const int greenLedPort = 10;
const int blueLedPort = 9;

// Fade settings
int minBrightness = 0;
int maxBrightness = 255;
int deltaBrightness = 10;   // step size; the sign flips at the top and bottom
int currentBrightness = 0;
int wait = 200;             // milliseconds between steps

unsigned long previousMilliseconds = 0;
unsigned long currentMilliseconds = 0;

void setup() {
  pinMode(greenLedPort, OUTPUT);
  pinMode(redLedPort, OUTPUT);
  pinMode(blueLedPort, OUTPUT);
  Serial.begin(9600);
  Serial.println("Started");
  previousMilliseconds = millis();
}

// Write the three PWM values and echo them to the serial monitor
void setColor(int red, int green, int blue) {
  Serial.print(red);
  Serial.print(", ");
  Serial.print(green);
  Serial.print(", ");
  Serial.println(blue);

  analogWrite(redLedPort, red);
  analogWrite(greenLedPort, green);
  analogWrite(blueLedPort, blue);
}

void loop() {
  currentMilliseconds = millis();
  if (currentMilliseconds > (previousMilliseconds + wait)) {
    // Green at full strength, red and blue at half, so the light reads as green
    setColor(currentBrightness / 2, currentBrightness, currentBrightness / 2);
    currentBrightness += deltaBrightness;
    // Reverse direction when we reach either end of the fade
    if (currentBrightness >= (maxBrightness - deltaBrightness) || currentBrightness <= minBrightness) {
      deltaBrightness *= -1;
    }
    previousMilliseconds = currentMilliseconds;
  }
}

Monday, November 21, 2011

sp_executesql executes extremely slowly

On a legacy system, we had a query that ran in about a millisecond in Query Analyzer, but in production it was slower than molasses in winter, taking up to three seconds.
The table being queried is simple and tall, around 25 million rows, with a clustered index on "bid":
CREATE TABLE [dbo].[BookHistory] (
    [bid] [varchar] (64) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [source] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [src] [varchar] (32) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [bookCode] [varchar] (32) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [bookStatus] [varchar] (32) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [actionTime] [datetime] NOT NULL
) ON [PRIMARY]
GO
CREATE CLUSTERED INDEX [BID_INDEX] ON [dbo].[BookHistory] 
(
 [bid] ASC
)
GO
The code is trying to retrieve all the records, typically a dozen, for a particular "bid", which should be faster than greased lightning.
exec sp_executesql N'SELECT bookCode as name,bookStatus,
DATEDIFF(DAY, actionTime, GETDATE()) as days  
FROM BookHistory with(nolock) 
WHERE bid = @bid AND source = @source AND src = @src'
,N'@source nvarchar(5),@bid nvarchar(10),@src nvarchar(4000)'

As the more astute of you can probably see by now, the issue is with the data type of "bid". In the database it's a varchar(64), but in the trace it comes in as an nvarchar.
In the C# code the type of the "bid" parameter was left to the default, which is nvarchar, so... SQL Server was thinking, "I've got this nifty little index for 'bid', too bad I can't use it since the incoming parameter is an 'nvarchar', oh well, I guess I'll do a scan and convert all those 'bid' thingys to nvarchar on the fly." That's why the SQL statement was glacial.
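The real fix is for the application to send "bid" as a varchar in the first place, so the traced call would look something like this (a sketch - the parameter values here are made up for illustration):
exec sp_executesql N'SELECT bookCode as name, bookStatus,
DATEDIFF(DAY, actionTime, GETDATE()) as days
FROM BookHistory with(nolock)
WHERE bid = @bid AND source = @source AND src = @src'
,N'@bid varchar(64), @source varchar(16), @src varchar(32)'
,@bid = 'B0001234', @source = 'web', @src = 'nightly-feed' -- hypothetical values
With the parameter types matching the columns, the optimizer can seek on BID_INDEX instead of converting every row.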
For the interim, since the SQL statement can be patched without a full deployment, I added a cast:
exec sp_executesql N'SELECT bookCode as name,
bookStatus,DATEDIFF(DAY, actionTime, GETDATE()) as days  
FROM BookHistory with(nolock)
 WHERE bid = cast(@bid as varchar(64)) AND source = @source AND src = @src',
N'@source nvarchar(5),@bid nvarchar(10),@src nvarchar(4000)'
And the statement that used to take a few seconds now takes a few milliseconds. Mystery solved.
Postmortem: The clue was that the execution plan was doing a scan instead of a seek on the index.
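Here's one way to see it for yourself (a sketch, with a made-up bid value): turn on the plan text and look for a clustered index scan whose predicate wraps [bid] in CONVERT_IMPLICIT instead of a plain seek.
SET SHOWPLAN_TEXT ON
GO
exec sp_executesql N'SELECT bookCode FROM BookHistory with(nolock) WHERE bid = @bid',
N'@bid nvarchar(4000)', @bid = N'B0001234'
GO
SET SHOWPLAN_TEXT OFF
GO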
Moral: Always set the type of the SQL parameter when using SqlCommand directly.
Moral 2: Don't use SqlCommand directly; use NHibernate or some other ORM so it can figure all that stuff out for you (although in legacy systems you don't always have that luxury).

Thursday, November 17, 2011

SQL Server Management Studio IntelliSense not working

My IntelliSense in SQL Server Management Studio (SSMS) stopped working months ago, and I finally decided to get it back.
Turns out that installing VS2010 SP1 disables IntelliSense in SSMS. Who knew?
The solution? Install Cumulative Update 7 for SQL Server 2008 R2.
1. Go here and select "SQLServer2008R2_RTM_CU7_2507770_10_50_1777_x64" if you are running the 64-bit version (it's fifth from the bottom). Download it. Well, you can't just download it. Frustratingly, you have to enter your email address and then they email you the link.
<gripe text="Why can't Microsoft just have a menu option in SSMS under 'Help' for 'Update to latest patches' - even Quicken does that." />
2. Click on it to execute it and wait. Wait some more. Then realize it's not going to run the installer - it just unzips itself to a directory.
3. Go to that directory and run "SQLServer2008R2-KB2507770-x64.exe" and that will install the update.
4. Although the installer doesn't prompt you to, reboot your system.
Worked for me.

Monday, November 14, 2011

Aparapi - Accessing the Power of the GPU from Java

Gary Frost from AMD presented the Aparapi library to the Java Users Group last Tuesday, Nov 8th, 2011. Aparapi makes it easy to write parallel Java programs that run on the GPU via OpenCL. Gary has done some very impressive work with it. Before Aparapi, programmers had to write large blocks of repetitive boilerplate to get their algorithms executed on the GPU; Aparapi generates that for us automatically.
If a GPU is not present, Aparapi falls back to running the kernel on a pool of Java threads to improve performance.
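To give a feel for how little code is left, here is a minimal sketch of an Aparapi kernel (my own toy example, not one from the talk) that adds two arrays; the class and variable names are made up:
import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class VectorAdd {
    public static void main(String[] args) {
        final int size = 1024;
        final float[] a = new float[size];
        final float[] b = new float[size];
        final float[] sum = new float[size];
        for (int i = 0; i < size; i++) { a[i] = i; b[i] = 2 * i; }

        // The body of run() is translated to OpenCL and executed on the GPU;
        // if no GPU is available, Aparapi falls back to a Java thread pool.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int gid = getGlobalId();
                sum[gid] = a[gid] + b[gid];
            }
        };
        kernel.execute(Range.create(size));
        kernel.dispose();
    }
}
Each work item runs the kernel body for one value of getGlobalId(), which is how the explicit loop over the array disappears.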
To demonstrate the power of the GPU, Gary showed an n-body example in which two galaxies collide.
Some of the performance hit from transferring data between the CPU and the GPU should vanish when AMD's Fusion chips, which put the CPU and GPU on the same die, become widely used.
Gary mentioned that we are entering a world of throwaway CPU power. He gave the example of parsing an XML file by dividing it into two parts. One thread parses the first half, but how to parse the second half depends on which of the 67 states the parser might be in at the end of the first half. The solution is to speculatively parse the second half under all 67 possible states, so that when the first half is done, the correct result for the second half is already finished.