Saturday, December 23, 2006
Closing Thoughts on the 451 Group Conferences and Grid Computing
Two weeks ago, I attended the Enterprise Computing Strategy conference put on by the folks at The 451 Group and wanted to share my closing thoughts...
There was lots of discussion around Sun Grid, and many folks concluded that while there is lots of marketing, there is not yet a market. Some of the characteristics preventing it from becoming a market are:
- It may not necessarily be cheaper for an enterprise to outsource its grid
- Outsourced grids do not address security very well
- It requires applications to be written to a specific target platform
- Federated Identity: Outsourced grids still seem, at some level, to ignore the fundamentals of identity management and keep their own identity stores. I suspect that if Mark Dixon and Pat Patterson had a conversation with the grid folks at Sun, they might uncover a great opportunity to noodle the notion of federated provisioning, where a Fortune enterprise, through its own identity management system, would issue SPML requests to the Sun Grid. Depending on what is required on the Sun side, it may also require federated workflow (aka BPEL). Alternatively, the grid should become aware not only of basic identity via protocols such as OpenID and/or SAML but should also understand XACML to express how many resources a given party is authorized to consume.
- Full Disk Encryption: Whenever enterprises send data outside their walls, they typically employ encryption. If an enterprise ships out a tape, it controls how encryption is applied, but on an outsourced grid this mechanism is either non-existent or not under the enterprise's control, making data leakage more difficult to prevent.
- Auditing: How does an enterprise tell, from an audit perspective, where its data is if the workload can be moved at any time? Many SOX auditors require the ability to tell where a workload resides, which becomes problematic on a grid. Building better audit mechanisms, along with an XML markup that can enumerate all the places a particular process has touched, would be useful.
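To make the auditing idea concrete, here is a minimal Java sketch of what such a trail might look like: a recorder that captures every grid node a workload touches and emits the trail as XML an auditor could consume. All of the element and class names here are invented for illustration; no such standard markup exists.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: record each grid node a workload touches and
// serialize the trail as XML for an auditor. Names are illustrative only.
public class AuditTrail {
    private final String workloadId;
    private final List<String> hops = new ArrayList<String>();

    public AuditTrail(String workloadId) { this.workloadId = workloadId; }

    // Called by the (hypothetical) scheduler each time the workload migrates.
    public void touched(String nodeId, String location) {
        hops.add("<hop node=\"" + nodeId + "\" location=\"" + location + "\"/>");
    }

    public String toXml() {
        StringBuilder sb = new StringBuilder();
        sb.append("<audit-trail workload=\"").append(workloadId).append("\">");
        for (String h : hops) sb.append(h);
        sb.append("</audit-trail>");
        return sb.toString();
    }

    public static void main(String[] args) {
        AuditTrail t = new AuditTrail("job-42");
        t.touched("node-7", "US-East");
        t.touched("node-19", "EU-West");
        System.out.println(t.toXml());
    }
}
```

Even something this simple would let an auditor answer "where has this workload been?" after the fact, which is the crux of the SOX concern above.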
Likewise, this begs the question of whether enterprises should continue to write applications in C++ or whether migrating all development to Java is better in the long run. It also begs the question of how Sun sees enterprises that embrace scripting-oriented frameworks such as Ruby on Rails participating.
I also learned that the University of Tokyo has a 512-core processor that seems ripe to displace all the incumbent vendors who think 16 cores is meaningful, and even startups such as Azul Systems, who are at 24 cores. Hopefully, industry analysts will pay attention to the GRAPE chipset and its potential.
One of the conversations I wished had occurred but didn't concerned the simple fact that the MHz race is, for the most part, over. Nowadays, everyone is driving towards multi-core architectures, which can only be utilized by first understanding how to write multithreaded applications and, secondarily, how to make tasks run in parallel.
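The two skills go together. As a minimal sketch in Java (using the java.util.concurrent package introduced in Java 5), here is the basic pattern of splitting a computation across a fixed pool of worker threads and joining the partial results; the problem chosen (summing squares) is just a stand-in for any divisible workload:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    // Sum the squares of 0..n-1 by splitting the range across worker threads.
    static long parallelSumOfSquares(final int n, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Long>> parts = new ArrayList<Future<Long>>();
        int chunk = (n + workers - 1) / workers; // ceiling division
        for (int w = 0; w < workers; w++) {
            final int lo = w * chunk;
            final int hi = Math.min(n, lo + chunk);
            parts.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += (long) i * i;
                    return sum;
                }
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // blocks until each part completes
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSumOfSquares(1000, 4));
    }
}
```

The hard part on a grid is not the mechanics above but the decomposition: deciding how to carve a real enterprise workload into independent chunks in the first place, which is exactly what the application-server model never taught developers to do.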
The J2EE and .NET communities have been telling us for so long that enterprise developers never need to worry about the complexity of writing multithreaded applications and that the application server will handle everything. In hindsight, this feels like a trap, as no J2EE vendor's application server has the right architecture to scale across hundreds of cores.
Maybe this is an opportunity for industry analysts to stop merely talking about the need and instead commission someone to tell enterprises how to write applications that scale in this regard. Analyst firms that cover grids can sometimes be impressed by the number of CPUs we can throw at a given problem space while still ignoring the fundamental question of how efficiently we are using them.
The biggest unanswered problem attendees mentioned was licensing. Acquiring a software license on a grid is pure overhead, and this problem space has at least two components. First, licenses have locational barriers (they can only be used in the US, at a named site, etc.); second, there is no such thing as a license markup language with which a computer could interpret all the characteristics of a license and make a decision at runtime. I would love to see the folks over at the OMG champion the creation of a license markup language. Maybe I will ping Richard Mark Soley, Peter Herzum or Phil Gilbert to see what it would take to get something started.
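To illustrate what a machine-interpretable license could enable, here is a hedged Java sketch of the runtime decision: a grid scheduler checks a license's locational and concurrency constraints before dispatching a workload. Every field and method name here is invented for the sake of the example; no license markup standard or API like this exists today.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: if license terms were machine-readable, a grid
// scheduler could evaluate them before placing a workload. All names
// here are illustrative, not part of any real standard.
public class LicenseTerms {
    private final Set<String> permittedCountries; // locational barrier
    private final int maxConcurrentCpus;          // concurrency cap

    public LicenseTerms(Set<String> permittedCountries, int maxConcurrentCpus) {
        this.permittedCountries = permittedCountries;
        this.maxConcurrentCpus = maxConcurrentCpus;
    }

    // Can this workload legally run on the proposed node?
    public boolean permits(String nodeCountry, int cpusRequested) {
        return permittedCountries.contains(nodeCountry)
                && cpusRequested <= maxConcurrentCpus;
    }

    public static void main(String[] args) {
        LicenseTerms terms = new LicenseTerms(
                new HashSet<String>(Arrays.asList("US")), 64);
        System.out.println(terms.permits("US", 32)); // within the license
        System.out.println(terms.permits("DE", 32)); // violates the US-only barrier
    }
}
```

A license markup language would essentially be the interchange format for objects like this, so that the check could happen at dispatch time rather than in a lawyer's office after the fact.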
The more interesting thing closed source vendors should pay attention to: if your enterprise customers are asking about grid computing and you aren't changing your licensing models to meet them, then they have one and only one choice, and that is to go open source and displace you. Hopefully, folks from Oracle, BEA and IBM will step up and figure this out before their customers do...