Let’s face it, all but the largest enterprises would prefer not to have any IT professionals on staff, or at least as few as possible. It’s nothing personal against geeks; it’s just that IT pros are expensive, and when IT departments get too big and centralized they tend to become experts at saying, “No.” They block more progress than they enable. As a result, we’re going to see most of the traditional IT administration and support functions outsourced to third-party consultants. This includes a wide range, from huge multinational consultancies to the one-person consultancy that serves as the rented IT department for local SMBs. I’m also lumping in companies like IBM, HP, Amazon AWS, and Rackspace, who will rent out both data center capacity and IT professionals to help deploy, manage, and troubleshoot solutions. Many of the IT administrators and support professionals who currently work directly for corporations will transition to working for big vendors or consultancies in the future, as companies switch to purchasing IT services on an as-needed basis in order to lower costs, get a higher level of expertise, and get 24/7/365 coverage.
2. Project managers
Most of the IT workers that survive and remain as employees in traditional companies will be project managers. They will not be part of a centralized IT department, but will be spread out in the various business units and departments. They will be business analysts who will help the company leaders and managers make good technology decisions. They will gather business requirements and communicate with stakeholders about the technology solutions they need, and will also be proactive in looking for new technologies that can transform the business. These project managers will also serve as the company’s point of contact with technology vendors and consultants. If you look closely, you can already see a lot of current IT managers morphing in this direction.
By far, the area where the largest number of IT jobs is going to move is into developer, programmer, and coder jobs. While IT used to be about managing and deploying hardware and software, it’s going to increasingly be about web-based applications that will be expected to work smoothly, be self-evident, and require very little training or intervention from tech support. The other piece of the pie will be mobile applications — both native apps and mobile web apps. As I wrote in my article, “We’re entering the decade of the developer,” the current changes in IT are “shifting more of the power in the tech industry away from those who deploy and support apps to those who build them.” This trend is already underway and it’s only going to accelerate over the next decade.
Please read the whole piece. Much of what is detailed in it is already well underway. Equipment has become smarter. Managed switches increasingly mesh themselves by default. Routers and ATMs, the same. Equipment manufacturers now have service arms to assist IT shops in deployment and long-term maintenance. Email services of all types are moving to the cloud in a land rush. The ability to contract out, or to build your own clouds, means there will be less need to manage individual servers. They will be treated like disposable peas in a pod.
So is the future bleak? No. But it does ratchet up the competency level of IT staff. Twiddling a wrench will still be required, but the chances that core staff will reach for the tool bag on a regular basis will dwindle. More likely, that staff member will spend their time in MS Project and Excel on program roll-outs.
It’s not uncommon to see the Register lay one down that is, shall we say, lacking insight. Seven lessons from the HP TouchPad fire sale is one such case.
The Tablet Effect is real. Really.
A price point can be identified for mass tablet adoption. Pretty much the entire planet assumes that come Xmas, tablets will be all over the place as “the item to buy.” The Suits figured that one out back in June. Earth to Reg: a price point can be found for ANY product. That has been a known quantity since long before Adam Smith wrote Wealth of Nations. There are no revelations in those two ledes.
WebOS has – or had – a market, as do other operating systems. WebOS still has a market. However, the OS needs to find a new champion or go FOSS. HP’s intent in the design was to develop the equivalent of a browser for devices, something that could be utilized across all facets of their product suite. They accomplished it, and then prepared to abandon it. That alone is a stupid move. It would have a market as a FOSS entry for many of the DIY Maker developments that crop up. Code to a single, simple, HTML-ish front end.
Fail to plan, and plan to fail. Depends on how you look at it. From the Boardroom this was a disaster, and one the Boardroom owns. But having found a price capable of selling the Pad, it was now a hit. They are even thinking of cranking up another run of the puppy. Failure? Only of confidence.
We are not individuals, particularly where the web is concerned. Oh please. There were billions of people who did not go for the deal. Nuf said.
Scalable deployment is even harder than clever people think
Information technology still has a long way to go. These two are really interlinked. Scalable architecture is not easy, but it’s not rocket science. What is hard is FUNDED scalable architecture. The fact is the techies/boffins don’t get what they want. They get what the Suits think the project can support on a cost vs. income matrix. The scene usually ends up at enough tech to support steady-state usage plus 20-30% for spikes. IT resources are always bounded by what the Suits are willing to spend. One other consideration: the whole Pad rush was also bounded by the supply chain. Makes no difference in the world if you have the latest IT tech when your supply chain system, which may belong to someone else entirely, is still on older tech. Your systems are then bounded by their limitations as well.
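To put rough numbers on the funded-vs-needed gap (every figure below is mine, invented purely for illustration, not anything HP or the Reg published), a back-of-envelope sketch in Python:

```python
# Back-of-envelope capacity math; every number here is an assumption.
steady_state_rps = 200           # normal request rate the Suits funded for
spike_headroom = 0.30            # the usual 20-30% cushion
funded_rps = steady_state_rps * (1 + spike_headroom)

fire_sale_multiplier = 15        # assumed surge when $99 TouchPads hit the wire
fire_sale_rps = steady_state_rps * fire_sale_multiplier

print(f"Funded capacity: {funded_rps:.0f} req/s")
print(f"Fire-sale load:  {fire_sale_rps:.0f} req/s")
print(f"Gap:             {fire_sale_rps / funded_rps:.1f}x over what was paid for")
```

Whatever the real numbers were, the shape is the same: the cushion the Suits fund is linear, and a fire sale is not.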
I should keep my trap shut, as I implicate myself in the observation. But it does get old hearing those that can’t pontificate about those that can. Of course, my ace card is that I have done the “can” for 30 years in IT, up and down the corporate food chain.
At the Office 365 launch, Gordon Frazer, managing director of Microsoft UK, gave the first admission that cloud data — regardless of where it is in the world — is not protected against the USA PATRIOT Act. . . . Frazer explained that, as Microsoft is a U.S.-headquartered company, it has to comply with local laws (the United States, as well as any other location where one of its subsidiary companies is based).
Though he said that “customers would be informed wherever possible”, he could not provide a guarantee that they would be informed — if a gagging order, injunction or U.S. National Security Letter permits it.
He said: “Microsoft cannot provide those guarantees. Neither can any other company.”
While it has been suspected for some time, this is the first time Microsoft, or any other company, has given this answer.
Or if not death, it will certainly move offshore. When companies fully grasp the exposure they have to the Patriot Act when using third parties, cloud or otherwise, it will change the overall picture. If the data is sensitive, no cloud. If the transaction is sensitive, no cloud. HIPAA, for example, has had some of the same chilling effects on cloud use. The law is so broad that one could not be sure that in using the cloud one is not infringing on the law itself, so caution has inhibited cloud use of medical data. It’s the primary reason Google shut down its medical data efforts.
Consider this: right now there is a rush by many corporations to move email to the cloud. Cloud-based Exchange is a very popular item and is popping up all over the place. The reason is simple: running email in-house is a low-value, high-risk endeavor, and only bigger companies are willing to invest in the internal maintenance needed to keep it running. But this just makes YOUR corporate email ripe for the taking by the FBI. In fact, it makes it a cherry-picking situation, and with the Patriot Act you will never know. The Act itself has gag provisions that would prevent a Microsoft from informing you, under penalty of law.
Well, I am sure you have heard about Blogger being down for 48 hours, painfully so if you are a Blogger user. Ann Althouse’s blog is still not up as I write this. Which does show the Achilles heel of cloud services: you don’t have any sort of control over what happens to your content, or over the effort that will be applied to resolving your issue.
A Blogger Service Disruption update contains four updates from the last 24 hours, starting with this one:
We have rolled back the maintenance release from last night and as a result, posts and comments from all users made after 7:37 am PDT on May 11, 2011 have been removed. Again, we apologize that this happened and our engineers are working hard to return Blogger to normal and restore your posts and comments.
That’s nearly 48 hours of downtime, and counting. Overnight updates promise “We’re making progress” and “We expect everything to be back to normal soon.”
My question is, “What if this had happened to another Google service?” Say, Google Docs? What if every document you wrote and saved on Wednesday was suddenly taken offline on Thursday, and you no longer had your presentation or your notes or your research for a client meeting today? How does this promise from Google sound now?
Your apps, documents, and settings are stored safely in the cloud. So even if you lose your computer, you can just log in to another Chromebook and get right back to work.
Please don’t wave your SLA like some Neville Chamberlain. When the chips are down, the question is not whether your service provider will get back up, but whether your data will survive once systems are restored. The other factor is whether your cloud provider will survive the torrent of lawsuits that follow. When it’s a system-wide outage and the majority of the customer base is on an SLA with applicable penalties, this becomes a salient concern. When the margins are thin, bankruptcy could be more than appealing to the vendor.
The fact is, like your mother told you at the tender age of 5, don’t put all your eggs in one basket. Insist that your employees who do use cloud services schedule backup cycles to your own in-house hardware. Preferably weekly, at a minimum. Look at providing some of the cloud-like services yourself. It is quite easy for a company today to buy a server or two, site them in a colo center, and manage them as team-based file servers using a Dropbox-like interface; iFolder and RubyDrop come to mind on that score. Then schedule nightly differential backups to be dumped back to home base.
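As a sketch of what that nightly differential dump might look like (the paths and the idea of keying off a last-run timestamp are my own assumptions, not a prescription for any particular tool):

```python
#!/usr/bin/env python3
"""Nightly differential backup sketch: copy anything changed since the last run
from a synced cloud folder (a Dropbox/iFolder-style mirror) back to home base."""
import os
import shutil
import time

SYNC_DIR = "/srv/cloudsync/team"     # assumed local mirror of the cloud share
BACKUP_DIR = "/backup/team"          # assumed in-house target
STAMP_FILE = os.path.join(BACKUP_DIR, ".last_run")

# Time of the previous run; the first run falls back to "copy everything".
last_run = os.path.getmtime(STAMP_FILE) if os.path.exists(STAMP_FILE) else 0.0

for root, _dirs, files in os.walk(SYNC_DIR):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) <= last_run:
            continue                 # unchanged since last night, skip it
        rel = os.path.relpath(src, SYNC_DIR)
        dst = os.path.join(BACKUP_DIR, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)       # copy, preserving timestamps

# Record this run so tomorrow's job only picks up newer changes.
os.makedirs(BACKUP_DIR, exist_ok=True)
with open(STAMP_FILE, "w") as f:
    f.write(time.strftime("%Y-%m-%d %H:%M:%S"))
```

Cron it nightly and you have your diff cycle. A real setup would add retention and verification, but the principle is the same.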
At the enterprise level, Amazon’s AWS now has Virtual Private Cloud, so a company can apply a belt-and-suspenders approach: home-based services handle the base-load requirements of the company, while cloud compute cycles soak up the intermittent peaks throughout the day as demand requires. This kind of split approach has more going for it than many might expect. If you run the numbers on the AWS estimator, it becomes quite evident that in a rent vs. buy comparison, buy is the cheaper solution if the compute platform has to be running 24×7.
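Run the estimator with your own workloads, but as a purely illustrative sketch of the rent vs. buy arithmetic (every price below is an assumption, not a current AWS or hardware quote):

```python
# Rent vs. buy over three years for one always-on box; all figures are assumed.
hours_per_year = 24 * 365

# Rent: an on-demand cloud instance running 24x7, assumed $0.50/hr all-in.
rent_rate_per_hour = 0.50
rent_3yr = rent_rate_per_hour * hours_per_year * 3

# Buy: a comparable owned server amortized over the same three years.
server_purchase = 6_000          # assumed hardware cost
colo_per_month = 150             # assumed colo space, power, and bandwidth
buy_3yr = server_purchase + colo_per_month * 12 * 3

print(f"Rent, 3 years, 24x7: ${rent_3yr:,.0f}")
print(f"Buy,  3 years, 24x7: ${buy_3yr:,.0f}")
```

With those made-up numbers, buy comes in cheaper; the crossover moves around with instance size and staffing costs, which is exactly why the estimator exercise is worth doing.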
So where do outages like Blogger’s lead us? If you are the CIO, then heed the old E.F. Hutton commercial — you earn it. Depending on cloud services never did really change the responsibility matrix all that much, SLA or no. Your right to sit in that corner office is still dependent on providing competitive, reliable services to the corporation. But the Blogger incident does bring one thing to mind: if you had to pull all the data sets out of the cloud back to home base, could you house it all? You might want to pose that question to your capacity planning manager.
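A rough way to pose it in code, assuming boto3 and that the cloud data in question lives in S3 (for another provider you would swap in its inventory API; the in-house figure is a made-up placeholder):

```python
"""Rough cloud-footprint audit: could home base house it all?"""
import boto3  # assumes AWS credentials are already configured

IN_HOUSE_FREE_TIB = 40           # assumed free capacity back at home base

s3 = boto3.client("s3")
total_bytes = 0
for bucket in s3.list_buckets()["Buckets"]:
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket["Name"]):
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]

cloud_tib = total_bytes / 1024**4
print(f"Data sitting in the cloud: {cloud_tib:.2f} TiB")
print(f"Free space at home base:   {IN_HOUSE_FREE_TIB} TiB")
print("Could house it." if cloud_tib <= IN_HOUSE_FREE_TIB else "Could NOT house it.")
```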
Facebook has garnered a lot of free press on their Open Compute Project, which they should be commended for. We’re all for openness here at ThirdPipe. But there is some humbug as well –
The Open Compute Project is huge, don’t get me wrong. By releasing the specifications and mechanical designs for the servers and data center in Prineville, Facebook has in one fell swoop set an incredibly high bar for those who would want to make their own datacenters.
Datacenter and hardware vendors are going to love hearing “hey, why can’t we get our specs to be like Facebook’s?” And, given the efficiency ratings and green aspects of the Open Compute specs, a lot of people are going to be clamoring for this design.
One big green aspect is the power usage effectiveness (PUE) stat, which measures the amount of power that gets from the outside power grid to the motherboard in each server. Facebook is reporting an initial PUE of 1.07 – which translates into 93 percent of grid power getting to the motherboards. The industry average PUE, the Open Compute Project reports, is something like 1.5.
Yeah, that’s nice.
Looking at the specs, it’s very easy to see how disruptive this is going to be. Facebook opted for commodity parts, building everything from the ground up. In fact, Facebook made a point out of mentioning that branding was specifically removed from the servers’ chassis. They even went without screws, to save time for servicing and weight of the servers.
The article goes down several avenues, all reasonable. But I think it misses a few key points –
This will be a site to watch. With 20k heads on a problem, problems end up small. The real issue is whether the OCP will stay willfully bound to the current bounds of design, or whether it will branch out and force vendors to travel a different path. Even more so, will Intel be watching?
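One footnote on the PUE figure quoted above, since the 93 percent number follows directly from it (the arithmetic here is mine, not Facebook's):

```python
# PUE is total facility power divided by power delivered to the IT gear,
# so the share of grid power reaching the motherboards is roughly 1 / PUE.
facebook_pue = 1.07
industry_pue = 1.5

print(f"Facebook: {1 / facebook_pue:.0%} of grid power reaches the boards")
print(f"Industry: {1 / industry_pue:.0%} of grid power reaches the boards")
```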
Hollywood has made tons of money from DVD sales and had hoped that Blu-ray would extend the life of the physical media market. While DVDs won’t go away anytime soon, and Blu-ray is selling in respectable numbers, the market for movies in boxes is shrinking. Online streams are becoming the favored method of consumption. While streaming rights have become a lucrative channel for Hollywood, the revenue per view is far below that of individual movie sales. Rather than celebrating the growth of a new revenue source, the studios see this as cannibalization. In the real world, cannibals don’t bother trying to eat mummies.
Hollywood wants movies sold as downloads to replace its physical media sales. There are multiple problems with that concept. The biggest one is that DRM makes downloads less universally usable than the old physical media. Another big stumbling block is cost: few consumers are willing to pay $10+ for a DRM-crippled download. Somehow an online storage locker is seen as a fix for these problems, even though it doesn’t really address them. Amazon was the first major retailer to roll out the locker service, but it won’t be the last. Apple and even some of the studios themselves plan to offer a similar service.
As a concept, a cloud disk you can play media from is probably something consumers will like, provided they control the content stored there. It’s more likely that Hollywood, Big Music, and the retailers themselves will insist on controlling the content. Add to that the high cost of “owning” rather than renting, and we have another great idea crippled by old-media control freaks. While new technology can give an enlightened business the freedom to grow its market exponentially, it can also give a doomed business model a way to self-destruct a little faster. Unless consumers can control the content, I think they will stick to the streams. Lockers as they are today will go the way of the VHS tape before they even get started.