How to waste $10 billion.

Allies pledge $10 billion to boost Itanium | CNET

Intel, Hewlett-Packard, and seven other server companies (Unisys, Silicon Graphics, NEC, Hitachi, Fujitsu, Fujitsu Siemens, and Groupe Bull) will spend $10 billion through 2010 to try to increase adoption of the Itanium processor.

And the kicker…

“Itanium has been taking share from both IBM Power and Sun SPARC. We’re on the right trajectory, but we want to go faster,” Tom Kilroy, general manager of Intel’s Digital Enterprise Group, said at a press event here.

Adrian Cockcroft on “Capability Utilization”

Performance Information and Tools

My observation is that utilization is useless as a metric and should be abandoned. It has been useless in virtualized disk subsystems for some time, and is now useless for CPU measurement as well. There used to be a clear relationship between response time and utilization, but systems are now so complex that those relationships no longer hold. Instead, you need to directly measure response time and relate it to throughput. Utilization is properly defined as busy time as a proportion of elapsed time. The replacement for utilization is headroom which is defined as the unused proportion of the maximum possible throughput.
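Cockcroft’s definition reduces to a one-line formula. A minimal sketch in Java, with hypothetical throughput numbers (the maximum would come from load testing the real system, not from CPU counters):

```java
public class Headroom {
    /**
     * Headroom as defined above: the unused proportion of the maximum
     * possible throughput. Replaces utilization, which no longer tracks
     * response time on complex systems.
     */
    public static double headroom(double currentThroughput, double maxThroughput) {
        return 1.0 - (currentThroughput / maxThroughput);
    }

    public static void main(String[] args) {
        // Hypothetical numbers: a service handling 1,000 req/s whose
        // measured peak under load testing is 4,000 req/s.
        System.out.println(headroom(1000.0, 4000.0)); // 0.75 -> 75% headroom left
    }
}
```

The point of the metric is the denominator: it forces you to actually measure maximum achievable throughput, rather than trust a busy-time percentage the hardware can no longer report honestly.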

Bruce Lindsay on the future of databases

developerWorks : Blogs : Grady Booch

Distributed two-phase commit will be avoided by recoverable messaging to applications (via services) that consult and modify the database and send a recoverable reply. Database size will become a non-issue. We’ll see lots of low-latency asynchronous replication of reference data among databases serving various applications and their associated service interfaces.
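Lindsay’s pattern, recoverable messaging in place of distributed two-phase commit, can be sketched as follows. This is my illustration, not his: the `Debit` message type and the in-memory maps are hypothetical stand-ins for a real message broker and database. The key idea is that each message is applied in one local transaction, with a dedupe table so redelivery is harmless, and the reply itself is just another recoverable message.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class RecoverableMessaging {
    // Hypothetical message type: a debit carrying a unique id, so that
    // redelivery (at-least-once messaging) can be detected.
    public record Debit(String id, String account, long amount) {}

    private final Map<String, Long> balances = new HashMap<>(); // stands in for the database
    private final Set<String> processed = new HashSet<>();      // dedupe table, stored WITH the data
    private final Queue<String> replies = new ArrayDeque<>();   // stands in for the reply queue

    /**
     * Consumes one message. Everything here would run in a single LOCAL
     * database transaction: no distributed two-phase commit with the
     * sender is ever needed.
     */
    public void handle(Debit msg) {
        if (processed.add(msg.id())) {                      // first delivery only
            balances.merge(msg.account(), -msg.amount(), Long::sum);
        }
        replies.add("ack:" + msg.id());                     // recoverable reply to the caller
    }

    public long balance(String account) {
        return balances.getOrDefault(account, 0L);
    }

    public int replyCount() {
        return replies.size();
    }
}
```

Because the dedupe check, the data change, and the outgoing reply commit together, a crash at any point either loses nothing or causes a redelivery that the dedupe set then ignores.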

SQL Performance Tuning DON’Ts

AskTom: “Not to do” Things

  • don’t accept string literals from end users and concatenate them into your SQL (i.e. DO use bind variables in almost all cases)
  • don’t test on an empty database or a database with a small percentage of the real system. Importing statistics from a “real” database doesn’t work. You need real data volumes if you want to see what will actually happen in real life.
  • don’t test with a single user; scalability issues will never make themselves apparent.
  • don’t say “we’ll just whip it out and see what sticks”.
  • DO think about scalability from day 1.
  • DO design your system.
  • DO NOT just wing it as you go along. That might sound like fun, but you’ll be working on that same system forever fixing it, patching it, trying to make it work.
  • don’t take any advice from experts unless you see compelling evidence that what is being suggested actually applies to you.
  • find someone willing to explain “why” – so you can increase your knowledge but also understand when something MIGHT NOT APPLY (because nothing is ever 100% true all of the time)
  • don’t try to optimize by hypothesize (it rhymes 🙂). “I feel this will be slow” – nope, don’t “feel”; show it to be slow, then optimize it. The point I’m trying to make is that I see people “feeling that doing ‘X’ will be slow” and designing a system around not doing ‘X’. Show us first that ‘X’ is too slow for you (typically people are trying to avoid a database feature because they have heard “it is slow”, e.g. auditing).
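The first DON’T above, in JDBC terms. A sketch only: the `employees` table and its column names are hypothetical, and `conn` would be a live `java.sql.Connection` in a real program.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BindExample {
    // DON'T: concatenation. It invites SQL injection, and every distinct
    // value becomes a brand-new statement the database must hard-parse:
    //   "SELECT first_name FROM employees WHERE last_name = '" + input + "'"

    // DO: one statement with a bind placeholder, parsed once and reused
    // for every value. (Table and column names are hypothetical.)
    public static final String SQL =
            "SELECT first_name FROM employees WHERE last_name = ?";

    public static String findFirstName(Connection conn, String lastName)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, lastName); // value travels separately from the SQL text
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```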

Annotations: startup performance impact on a (Servlet 2.5) server

New features added to Servlet 2.5

Annotation performance
Whether or not you use annotations, and especially if you don’t, it’s important to understand the performance impact they can have on a server at startup. For the server to discover annotations on classes, it must load the classes, which means that at startup a server will look through all the classes in WEB-INF/classes and WEB-INF/lib, looking for annotations. (Per the specification, servers don’t have to look outside these two places.) You can avoid this search when you know you don’t have any annotations by setting the metadata-complete attribute on the <web-app> root like this:

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         version="2.5" metadata-complete="true">

When you’re not using annotations, this reenables lazy class loading.

Hotmail: 100 sysadmins, 10,000+ servers, multiple petabytes of data.

A Conversation with Phil Smoot

At every level—failing LUNs (logical unit numbers), failing servers, failing services, failing networks—you want to make sure that all calling services don’t tie up all of their resources on any particular component. And you want to make sure that operationally you can isolate a broken or failing hardware component or server or service from the rest of the network. Most of this experience comes with time. We try to build our bench with folks who understand what mistakes not to make.

building JNI DLLs: use the lowest compiler optimization level; increase it only if proven beneficial!

Kelly O’Hair’s Blog: Compilation of JNI Code

… dynamic loading of native libraries can be tricky. Some of the compilation issues relate to the library being loaded into a Java process and causing conflicts in the way the JDK was compiled itself, or the way the JDK assumes code was compiled (sometimes this is considered a JDK bug, but it depends on a great number of factors).

All libraries loaded into Java are assumed to be MT-safe (multi-thread safe). This means that multiple threads could be executing the code at the same time, so static or global data may need to be placed in critical sections. If you don’t know what MT-safe means, do some research and make sure you understand it; it is a critical concept when creating JNI libraries and writing JNI code. So if there is a compiler option to select MT-safe generated code, you will need it.
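The hazard is easy to demonstrate from the Java side. The sketch below is plain Java rather than native code, but the situation is the same for a JNI library: several Java threads enter the same function at once, and any static or global state it touches must sit in a critical section (here a `synchronized` block; in C, e.g. a pthread mutex).

```java
public class MtSafeCounter {
    // Stands in for static/global data inside a shared library.
    private static long hits = 0;
    private static final Object lock = new Object();

    /** The "library function": safe because the shared state is guarded. */
    public static void record() {
        synchronized (lock) {
            hits++; // without the lock, concurrent ++ would lose updates
        }
    }

    /** Starts nThreads, each calling record() perThread times. */
    public static long run(int nThreads, int perThread) {
        hits = 0;
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) record();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return hits; // safe to read: join() establishes happens-before
    }

    public static void main(String[] args) {
        System.out.println(run(8, 100_000)); // 800000, never less
    }
}
```

Remove the `synchronized` block and the final count will usually come up short under contention, which is exactly the kind of corruption an MT-unsafe native library produces, except there it tends to surface as a crash.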

Compiler optimizations are always an issue, and you need to be careful with high optimization levels. The safest route is to stick with plain -g compilations, then use the lowest optimization level you need, increasing it only when your native code and the overall application really benefit from the increased optimization.