See also Dave's post on The Meaning of Rigor in Project Management
If I were the house setting the over-under, I'd say 10 & 6, respectively.
Yikes: James Snell: We (IBM) probably have at least that many already in production within our firewall with more on the way.
I'm glad this is a make-believe over-under.
To clarify, I meant general-purpose, integration-focused AtomPub servers. I predict that AtomPub will result in 10 OSS and 6 proprietary general-purpose, integration-focused AtomPub servers. I'll go ahead and set the "AtomPub built into your existing commercial product/service (e.g., Lotus Connections)" line at 350.
He said: Messaging is a big deal and having something run over Atompub would justify an RFC.
+1 to that
Otherwise a lot of people are more or less going to reinvent it anyway.
More and more I'm seeing HTTP as the preferred protocol, with protocol gateways to messaging. As we are using a bit of messaging and a bit of AtomPub, it would be nice to have an RFC that defined one way to do this. It might not be necessary (or achievable), but it could also be useful to include how to go from messaging to AtomPub (i.e., the messaging side). I mean specifying both the "server-side gateway" & the "client-side gateway".
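A minimal sketch of what I mean by the server-side gateway - HTTP request in, message operation out. This is just an illustration, not any real spec: the path shape, status codes, and the in-process queue.Queue standing in for a real MOM destination are all my own assumptions.

```python
import queue

# Stand-ins for real MOM destinations (e.g., JMS queues), keyed by name.
destinations = {}

def server_side_gateway(method, path, body=None):
    """Map an inbound HTTP request onto the messaging side.

    POST /queues/<name> -> enqueue the body as a message (201)
    GET  /queues/<name> -> browse the head message without consuming it (200/204)
    """
    name = path.rsplit("/", 1)[-1]
    dest = destinations.setdefault(name, queue.Queue())
    if method == "POST":
        dest.put(body)
        return 201, ""
    if method == "GET":
        # Browse, don't consume: a destructive read would make GET unsafe.
        return (200, dest.queue[0]) if not dest.empty() else (204, "")
    return 405, ""
```

The interesting design question an RFC would have to settle is exactly this GET behavior: browse vs. destructive receive.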
I ran into so many bizarre problems with obscure web containers that I was forced to do all sorts of HTTP traces, repeatedly read the specs, etc. A memorable problem was a version of Chilisoft truncating SiteMinder's ginormous SMSESSION cookie. Identifying that defect took some work. Sun was awesome to deal with though - I remember them getting us a patch in 2 days.
Anyway, it's an older book, but I'm reading HTTP: The Definitive Guide, and I am surprised at how much I still don't know about HTTP. I really like the book - it makes me like HTTP even more.
The whole naming thing is a little silly (e.g., Web 2.0, Enterprise 2.0), but I suppose it does have its place in the communication of ideas.
Anyway, here is the list:
It might not fit - or might be too much of "the how" rather than "the what", but something about open standards and open code on the list might be appropriate.
The very worst example of the framework trend is seen in the decision to purchase a mammoth framework offering that provides everything in one box as an “integrated solution”. A huge stack that gets connected into everything and exerts massive gravity on our architecture. Everything becomes an exercise in warping aspects of our system to fit with this stack and the assumptions of its creators. Essentially we’ve bought “architecture in a box”.
The older I get, the simpler I like things. I don't have anything against frameworks, but they often become massive technical debt by themselves.
Einstein is over-quoted, but more and more my views on software architecture are centered on "Make everything as simple as possible, but not simpler."
This means writing the tiniest amount of code you can. For instance, use HTTP clients, use JMS clients, write a security impl., perhaps a simple feed reader. Deploy some really good web architecture. Some good monitoring foo. And then build in unit, integration, and acceptance testing from the beginning. And yer done. At least if you want to take advantage of all the engineering that already went into the web - which at this point is kind of a lot.
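To show how tiny the "simple feed reader" piece can be, here's a sketch using nothing but the standard library. The feed itself is made up for the example; a real reader would fetch the XML over HTTP first.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def read_feed(atom_xml):
    """Return (title, updated, id) for each entry in an Atom feed."""
    root = ET.fromstring(atom_xml)
    return [
        (
            entry.findtext(ATOM + "title"),
            entry.findtext(ATOM + "updated"),
            entry.findtext(ATOM + "id"),
        )
        for entry in root.findall(ATOM + "entry")
    ]

# A made-up feed, just for illustration.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <entry>
    <title>Order 42 shipped</title>
    <updated>2007-10-01T12:00:00Z</updated>
    <id>urn:example:order:42</id>
  </entry>
</feed>"""
```

That's the whole point: a few dozen lines on top of what the web already gives you, not a platform.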
It is darn-tootn' good. And you can just export your feeds from Bloglines and import into Google Reader. 2 minutes, done.
Thanks for the good year (or so) Bloglines - you were very good to me . . .
Ron Schmelzer of ZapThink
So what does "Proposed Standard" mean? That might not sound very stable - "Proposed" - but that term does have special meaning in the IETF. The best way to explain it is that there are plenty of other protocols that you use every day that are "Proposed Standards", for example: WebDAV, TLS, LDAP, SMTP, SIP, and IMAP. Of course, if you want to know more about the IETF's standards process, they have it documented, in RFC 2026
Joe Gregorio (now at Google) talks about GData and says: This is great news for us because the Google Data APIs are built on the Atom Publishing Protocol. The current Google Data APIs are based on early versions of the AtomPub specification and now that AtomPub is a Proposed Standard work will begin on getting all of our Google Data APIs compliant with RFC 5023.
I spent a little time today thinking through a naming standard for our integration architecture. We need to support Push & Pull. This is essentially REST/AtomPub & Messaging.
For REST, I liked what RESTful Web Services had to say:
. . . So which is it? URI as UI, or URI opacity? For once in this book I'm going to give you the cop-out answer: it depends. It depends on which is worse for your clients: a URI that has no visible relationship to the resource it names, or a URI that breaks when its resource state changes. I almost always come down on the side of URI as UI, but that's just my opinion.
URI Templates seem like a natural fit for specifying URI standards & URIs that are in use.
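For instance, the basic {name} substitution form of URI Templates is simple enough to sketch in a few lines. This only handles plain substitution (none of the fancier operators), and the URIs are hypothetical:

```python
import re
import urllib.parse

def expand(template, variables):
    """Expand simple {name} placeholders in a URI Template.

    Each value is percent-encoded so the expanded URI stays well-formed.
    """
    def sub(match):
        return urllib.parse.quote(str(variables[match.group(1)]), safe="")
    return re.sub(r"\{(\w+)\}", sub, template)
```

So "http://example.com/orders/{order_id}" plus {"order_id": 42} gives you "http://example.com/orders/42" - a compact way to document the URIs your integration architecture supports.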
What about on the Push side (i.e., traditional MOM/XMPP) - what is the analogue? Is there one?
I toyed with the idea of using the same verbs (GET & POST at least), but I think the semantics are different enough with messaging that it is either too clever or just lame to use a subset of the same verbs. The main issue is that GET isn't safe or idempotent in messaging - a receive consumes the message. I guess you could use GET for browse, DELETE to pop off a queue, and POST to send a message, but it just seems too contrived to me. These are different styles - why break people's heads with foolish consistency?
In the past I have had good luck with NOUN.VERB in messaging, which is similar to REST in a way if you standardize on it.
Other ideas are NOUN.INPUT, NOUN.OUTPUT; NOUN.REQUEST, NOUN.RESPONSE.
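The NOUN.VERB idea is easy to enforce with a tiny helper. The particular verb list here is just an illustration of the kind of standardization I mean - your shop's list would differ:

```python
# Hypothetical controlled vocabulary; pick whatever verbs your shop standardizes on.
ALLOWED_VERBS = {"create", "update", "delete", "notify", "request", "response"}

def destination(noun, verb):
    """Build a NOUN.VERB destination name, e.g. 'order.create'.

    Standardizing on this gives messaging something like REST's
    resource-plus-method shape: a constrained set of verbs applied
    to any noun.
    """
    if verb not in ALLOWED_VERBS:
        raise ValueError("unknown verb: " + verb)
    return noun.lower() + "." + verb
```

The value isn't the three lines of code - it's that the verb set is closed, the same way the uniform interface is.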
Any other ideas / experiences?
Good paper(s) too:
REST is clearly not the "end of history" with regards to network-based software architecture. At smaller scales, with somewhat different desirable properties, other styles, such as Remote Data Access, or Event Based Integration, continue to solve important organizational and technological challenges. Beyond the current iteration of the Web, future requirements will require a re-evaluation of REST's constraints and desirable properties for a new set of requirements. This may lead to relaxing some of REST's constraints, or an introduction of new constraints (Extending REST for Decentralized Systems). Regardless of the direction this may take, a continued means to successful future systems architecture will be the discipline of objectively evaluating constraints, and the properties they induce, for the new generation of global scale, network-based software systems.
Via Stefan Tilkov
Update: fixed broken link to presentation. Also added this image as it caught my eye:
I distinctly remember the "Object Web" phase of distributed computing. I remember being a participant in it. I bought into CORBA. I remember saying things in 1998 like, "we need more - we can't be constrained to HTTP GET and POST in the browser. We need to be able to bust a socket and talk to an ORB for a richer experience." I was kinda clueless then. At least I had a lot of company - everybody, including Lotus Domino (which I was integrating with when I made that comment), was busily adding CORBA capabilities to their whos-its and whats-its. Well, everybody besides MSFT and friends, but that is a different story.
Almost 10 years later, we still just have GET and POST in the browser, but XHTML 5 will eventually fix that. It turns out you can get pretty far with just GET and POST. You can get even farther by discarding distributed object / RPC technology altogether and using the REST uniform interface.
It will be great to meet Stefan Tilkov, Steve Vinoski, Pete Lacey, and Dan Diephouse in person. We'll no doubt have a lot to discuss.
Let me try to explain what I’m talking about. You want an API without having to write a line of code? It’s called curl, and it ships with your MacBook. And just look at how simple the APIs are in your favorite language.
The wins go deeper than APIs, though. Think about what it would take to add load-balancing to CouchDB. I’ll give you a hint: perlbal. Or what about adding a transparent caching layer? Try Squid.
HTTP is the lingua franca of our age; if you speak HTTP, it opens up all sorts of doors. There’s something almost subversive about CouchDB; it’s completely language-, platform-, and OS-agnostic.
Look, CouchDB may succeed, and it may fail; who knows. I’m sure of one thing, though — this is what the software of the future looks like.
Patrick and I were talking about this a bit yesterday. It is tough for some in the middleware world to get this. It will come with time. It is a bit nuts that your integration architecture really is just the judicious use of HTTP, well designed URIs, and data formats. How that architecture is realized (CouchDB, Squid, perlbal, etc. etc.) really won't matter over time.
If you can't answer yes, yes, and yes to "completely language-, platform-, and OS-agnostic" when it comes to your integration architecture, you are doing something very wrong.
For example, one of the Web guys commented that they did not want to have to buy the whole set (i.e. complete product) when all they really needed was a paring knife, yet the only choice they are offered is to buy the whole set.
. . .
"Bloatware" is definitely part of the issue. Ironically so are many of the things a lot of us have worked to define, create, and promote during the past two or three decades around guarantees of atomicity and isolation. None of the Web companies use distributed two-phase commit, for example. They only use it to dequeue an element and update a database in the same local transaction. So much for all that work on WS-TX! ;-)
. . .
It will be very interesting to observe over the next few years the extent to which the ideas and techniques in the custom built solutions become more widely adopted and incorporated into commercial products. One of the inevitable questions, as raised during the discussions, is how broad the market is for such things as Google's file system and big table, or Amazon's S3 and Dynamo.
You have to look at the open source movement as a global phenomenon. My day literally starts with a call from Moscow and ends with a call from Beijing. From that perspective, geography is less important in this incredible global collaboration over the Internet.
Having said that, I'd like to look specifically at Oregon and where I believe Oregon provides a leading role...starting with Linus...I think it's important to Oregon that Linus lives there. I think that is a real indication of the importance of Portland as a place where real leaders in the open source community reside and do their work.
. . .
There is opportunity for places like Portland because of this fact, that it is a global phenomenon, to really attract top people to the area, either to be close to their peer group, or just based on the fact that it's a more affordable and superior standard of living.
I see more small- to mid-sized software companies in Portland's future. Also small satellite development centers for mid- to large-sized companies as the world continues to flatten. I don't see many major companies moving here or anything like that. Portland has a decent amount of talent in open source & software more broadly.
The biggest thing Portland really has going for it is that people (especially creative people) will typically move here in a heartbeat, as it is such a great / unique place to live.
I don't see Portland ever being a Tier 1 city or major tech hub. But clearly over time (20 years or so) SF, Portland, and Seattle will continue to morph together. I see a very bright future for software in Portland.
Summary: At Agile2007 we heard the tale of a distributed Scrum project with 50 people on 4 continents. BMC Identity Management decided to build their next generation product, including architectural changes and component integration, using Scrum to handle the uncertainty of their product's requirements.
The problem is caused by the root culture of IT — project-driven funding models, a cobbler’s kids perspective on investing in infrastructure that helps IT (rather than a particular project), and a propensity to never decommission applications. IT systems have grown organically for the last 40 years. They’re a mess. It requires a fundamental change in the way IT operates as a service provider within the organization.
Here is the post on the Yahoo service-oriented-architecture list.
She is of course correct.
And all the while the technical debt is piling up. Maintenance is accounting for more and more of the IT budget. Soon there will be nothing left for new development.
There are of course solutions to this problem, but they require a lot of discipline and vision.
I think the discipline and vision are much less costly & more effective than the annual madness of budgeting in your typical large company.
The first step in moving from forecast-driven projects to feedback-driven... is to change the measurements. The book "Rebirth of American Industry" by William Waddell and Norman Bodek makes a good case that the measurements imposed by traditional cost-accounting methods are the biggest impediment to the successful implementation of lean manufacturing. Similarly, I believe that the measurements imposed by traditional project management methods are the biggest impediment to the successful implementation of lean development. In particular, instead of measuring variation from plan, we need to start measuring the delivery of realized business value.
Eric Newcomer comment: I tend to think of the IT world as divided between systems designed before the Web, and those designed to include it. The mindsets are very different, as you point out, but I would add that the pre-Web mindset is kind of driven by mainframe centric designs - I like to think that the issues with existing middleware (and perhaps binary languages) is due to the fact we always felt we had to design in all the features/functions of mainframe based systems in order to entice enterprises to move those apps to standards based systems.
There is a lot of truth to this. I live with a lot of big iron. Interestingly, AtomPub & the whole pull-based feed approach to integration is somewhat similar to batch processing. Perhaps it can help bridge that gap a little bit? Hey, that would be cool - Atom feeds from VSAM files & AtomPub from CICS. Maybe, but it is the batch cycle itself that has a stranglehold. Batch cycles that have been nurtured for 30+ years are very difficult to untangle. Things are really bound up there.
The Enterprise Cool URI will save us yet.
Dan Hatfield: Honestly, I see the ESB as primarily a political thing. It allows for a greater degree of control on delivered solutions. In large companies, we don’t often do architecture - we do politecture…The politics drive the architecture. Not the way it should be…but that’s the way it is.
Politecture! Ouch. Sad but true. I walk this line on a regular basis. I push the envelope as far as it can go, but politics are ever present.
More Vinoski: Another non-technical way to look at it is from the viewpoint of Clayton Christensen’s classic book, The Innovator’s Dilemma. For quite a few years now, we’ve seen a series of sustaining innovations in the “object/service RPC” line of descent originally popularized by CORBA and COM, both of which built on earlier RPC, distributed object, and TP monitor technologies. RMI, EJB, SOAP, WS-*, and ESB are all offspring in that line, and there are surely more to come. I feel that REST, on the other hand, fits the definition of a disruptive innovation perfectly (and if you’re too lazy to read the book, then please at least follow the link, otherwise you won’t understand this at all). The proponents of the sustaining technologies look at REST and say, “well it can’t solve this and it can’t solve that” and voice numerous other complaints about it, precisely as Christensen predicts they would. But Christensen also explains why, at the end of the day, any real or perceived technical shortcomings simply don’t matter (and in this case, they’re mostly perceived, not real). HTTP-based REST approaches have a lower barrier to entry and are less complex than anything the sustaining technologies have to offer, and REST is disrupting them, whether all the smart folks pushing ESBs like it or not. It’s not a technical issue, and there’s no amount of technology the non-REST tribe can throw at it to stop it because it’s based on how markets work, not on the technical specifics.
But when the next big thing comes along (and I love this business, because I know it will) you won’t have to rely on the professional noticers to tell you because it’ll touch your life directly.
I love this business too.
This internet-machine is wicked cool.
Good stuff is being discussed. Cool. The ESB Question - Steve Vinoski of former IONA fame.
More insight from Patrick: Properly Striking a Balance Between Shared Agreement and Decentralized Execution
There is plenty more, but you can find the rest pretty easily.
Ed (co-worker) and I messed with it a bit a couple weeks ago. It is approachable for newbies in that it is just code. I haven't really gotten that excited about the whole DSL thing, but an Enterprise Integration Patterns DSL sounds nice.
When Ed and I messed with it, it seemed young, but headed in a good direction. There is a lot to be said for simple - just download a jar file or 2 vs. adopting a whole platform. I know for a fact that there are a lot of people out there that would like to stop writing this type of code, but either can't or don't want to adopt a whole platform to do it.
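"This type of code" being the hand-rolled plumbing behind the Enterprise Integration Patterns. A content-based router, for instance, tends to look like this when you write it yourself (the message shape and queue names here are made up):

```python
def route(message):
    """A hand-rolled content-based router - the kind of boilerplate
    an EIP DSL like Camel's lets you stop writing by hand."""
    kind = message.get("type")
    if kind == "order":
        return "queue.orders"
    if kind == "invoice":
        return "queue.invoices"
    # Anything unrecognized goes to a dead-letter destination.
    return "queue.deadletter"
```

Multiply that by every route, filter, and translator in an integration, and a declarative DSL plus a jar file or two starts looking pretty attractive.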
It looks like the Camel docs might be getting rev'd a bit. I particularly liked seeing links to unit tests in the docs.
Good docs rock. Matt Asay says it results in more sales for open products.