Intel vPro – Next-Gen Remote Computing Or Hackers’ Party?


Remote system access has been around since the days of Microsoft’s NetMeeting and Symantec’s pcAnywhere. These tools allow IT staff to take over a system’s mouse and keyboard across a LAN or Internet connection and operate on the system as though they were right there in front of the machine itself, seeing what’s on the user’s screen. Countless problems have been resolved this way. Still, this approach has one major downfall: if the user’s OS is corrupted or has crashed, the remote connection does not work. Enter Intel vPro technology.

Targeted at businesses and not, for now, at consumers, Intel vPro technology is a set of technologies built into the hardware of a laptop or desktop PC, with a focus on three areas: e-discovery and investigations; data protection and loss prevention; and automated system health and updates.

A PC with vPro includes Intel AMT, Intel Virtualization Technology (Intel VT), Intel Trusted Execution Technology (Intel TXT) and a gigabit network connection, along with at least a Core 2 Duo or Quad or Centrino 2 processor. Intel AMT is a set of remote management and security features designed into the PC’s hardware which allow a sys-admin with AMT security privileges to access system information and perform specific remote operations on the PC. These operations include remote power up/down (via Wake-on-LAN), remote/redirected boot (via integrated device electronics redirect, or IDE-R), console redirection (via serial over LAN), and other remote management and security features. In essence, vPro allows IT technicians to protect, maintain, and manage notebook and desktop PCs even if the PC’s power is off, its OS is unresponsive, hardware (such as a hard drive) has failed, or software agents are missing.
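For a flavour of the mechanics involved, here is a minimal Python sketch of the classic Wake-on-LAN magic packet that remote power-up builds on. The MAC address is made up for illustration, and real AMT layers authentication and much more on top:

```python
# A minimal Wake-on-LAN sketch: the "magic packet" is six 0xFF bytes
# followed by the target MAC address repeated sixteen times, broadcast
# over UDP. vPro/AMT's remote power-up builds on this mechanism.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:1B:21:AA:BB:CC")  # hypothetical MAC address
```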

This “embedded” technology means IT administrators can quickly identify and contain more security threats, remotely maintain PCs virtually anytime, take more accurate hardware/software inventories, resolve more software and OS problems down the wire, and accurately diagnose hardware problems, all without leaving the service center. This allows businesses to save millions through increased productivity and reduced administrative overheads and associated costs.

Intel claims that because the vPro security technologies are designed into system hardware instead of software, they are less vulnerable to hackers, computer viruses, computer worms, and other threats that typically affect an OS or software applications installed at the OS level (such as virus scanners, antispyware, inventory, and other security or management applications). For example, during deployment of vPro PCs, security credentials, keys, and other critical information are stored in protected memory (not on the hard disk drive) and erased when no longer needed. vPro even allows a PC user to press a few keystrokes in the midst of a total operating system crash, when not even the mouse pointer is responding, to send a dispatch to IT indicating the user needs help. Interestingly, this also shows that the motherboard is monitoring all keystrokes, all the time. But is that all vPro is doing?

Such ‘trusted’ computing technology raises many potential security concerns for users, especially since there is apparently no way to disable vPro on a PC, and since most users cannot detect outside access to their PC via vPro’s hardware-based technology.

How Intel vPro Works

Combine this with the fact that vPro operates on the main system bus via the Q45 chipset (which enables Remote Alerts, secured access in Microsoft NAP environments, Access Monitor, Fast Call for Help, and Remote Scheduled Maintenance) and on the CPU via Core 2, and you have technology that theoretically provides access to all hardware, including memory and the CPU, along with special compute abilities and communications that allow it to send and receive behind the scenes. This means a remote user could theoretically gain covert access to the entire system through vPro; from there it is just a matter of snooping through memory and hard drive files until whatever they are looking for is found and transmitted over the Gigabit Ethernet connection, through which even 16 GB of RAM content could be sent in about two minutes. It’s worth mentioning here that disgruntled and ex-employees usually pose the biggest security threats to enterprises.

Intel doesn’t release details, but if the vPro snoop software were built on AI, or were at least smart, it could also send the typically used 800 MB or so of OS RAM and program data in under 10 seconds, along with other data. This is the area of memory which contains the cipher keys, encrypted data and information about paged data, which could then be retrieved from the hard disk. All of this could theoretically happen remotely and covertly, without the typical user ever knowing anything about it.
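A quick back-of-the-envelope check of those transfer times, sketched in Python and assuming an ideal, uncontended Gigabit Ethernet link:

```python
# Sanity-check of the transfer times quoted above, assuming an ideal
# Gigabit Ethernet link (1 Gbit/s = 125 MB/s theoretical throughput).
GBE_BYTES_PER_SEC = 1_000_000_000 / 8

def transfer_seconds(payload_bytes: float) -> float:
    """Time to push a payload over an ideal GbE link."""
    return payload_bytes / GBE_BYTES_PER_SEC

print(transfer_seconds(16 * 1024**3) / 60)  # 16 GB of RAM: ~2.3 minutes
print(transfer_seconds(800 * 1024**2))      # 800 MB OS working set: ~6.7 s
```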

Though the industry claims it’s a secure platform, as pretty much anyone in the security arena recognizes, any bit of “secure” computing is only secure for a limited period of time. Eventually, the security is cracked. It happened with vPro technologies in January of last year, when security researchers from Invisible Things Lab created software that ‘compromised the integrity’ of software loaded using Intel’s Trusted Execution Technology. TXT is supposed to help protect software, e.g. a program running within a virtual machine, from being seen or tampered with by other programs on the machine. The researchers said they created a two-stage attack, with the first stage exploiting a bug in Intel’s system software and the second exploiting a design flaw in the TXT technology itself.

When something so powerful is made possible through this technology, it is worth asking whether it will really go unexploited by the black hats and those who crave power. Intel vPro still has a long way to go before it can win trust.

What To Expect In Pakistan’s Technology Sector in 2010


Published Dawn, Sci-Tech, January 3rd, 2010

Growth is expected to return to the global IT industry in 2010, with a projected 3.2% increase for the year returning the industry to 2008 spending levels of about $1.5 trillion (source: Gartner). With BRIC countries growing 8–13% and Pakistan’s GDP crossing the US$160 billion mark, the technology industry should do well in 2010.

The upcoming year appears poised to build on the strength of trends already in place: greater mobility, greener technologies, mobile technologies, more powerful hardware and web-enabled products and applications that focus on collaboration and interoperability. Here’s what we think is in store.

Hardware Gets Smaller, More Powerful and Greener

This is a no-brainer. Intel Pakistan has announced that its new 32nm architecture, codenamed Sandy Bridge, will arrive in 2010. It will succeed the 45nm Nehalem architecture and will have up to eight cores on the same die, 512KB of L2 cache and 16MB of L3 cache. Also new will be the AVX (Advanced Vector Extensions) instructions, which might be as significant as the introduction of SSE in 1999. To complement this, Intel will also introduce the new Clarkdale family across the mid-range segment. With clock frequencies from 3.2GHz up to 3.46GHz, it will be Intel’s first 32nm processor and will grab the baton from the Core 2 Duo/Core 2 Quad series. This will bring a revolution in gaming, applications, HD and multimedia, at a price that is really sweet.

Online Reaches Critical Mass

Pakistan is among the five dynamic economies of developing Asia in terms of increased penetration of mobile phones, internet and broadband, according to the Information Economy Report 2009, published by the United Nations Conference on Trade and Development (UNCTAD). In internet penetration Pakistan is placed third, and in broadband penetration fourth, in Asia. With WiMAX taking off and providers such as Wateen already boasting 100,000 connections, we can safely predict that the internet in Pakistan will reach critical mass this year (up from its current 11.6% penetration) and move from being a niche channel to figuring more prominently in our lives.

The Year Of The Mobile: m-Commerce, Mobile Web and Micro-lending

Expect the mobile phone to further its hold over our lives. 2010 will see it being used for micro-lending, micro-payments, reporting violence and human rights abuses and crowd-sourcing crisis information.

It will also become the default charity tool. For a while now, we’ve been able to leverage the immediacy of SMS text-to-give campaigns to donate instantly to a cause. Expect NGOs to further improve these platforms in 2010.

The mobile web is also starting to emerge in Pakistan as a low-cost way to deliver simple mobile applications to a range of devices. Expect more financial institutions to take initiatives in this field, and more consumer-oriented ventures, such as music platforms, to be announced this year.

The next big thing in mobile, however, will be location-based social networks (the marriage of mobile and social networks) and the real-time web. We’re expecting some company to announce a venture in this field this year.

Enterprise Computing: Green IT & Sustainable Computing

Rising energy costs, the rise of the carbon credits market and pressure from the Copenhagen Climate Change Conference will make sustainability a source of opportunity for the Pakistani IT industry in 2010, locally and globally. We predict that new IT companies dealing with carbon management software will be set up and that existing enterprise software vendors will announce forays into the field. This market stands to become bigger than the global financial software market, so it is hard to imagine firms not taking advantage of it.

Intel Core i5 750 – First Look


Intel took a big leap forward in the design department when it launched the Core i7 900-series processors last year. Just a few of the improvements included a new triple-channel memory controller integrated into the chip, a new QuickPath Interconnect to replace (and improve upon) the front-side bus architecture of old, and the return of Hyper-Threading, which splits the chip’s four physical cores into eight virtual cores for increased system performance.

The Core i7 900-series chips were based on the new Intel X58 chipset and LGA1366 socket, so aspiring enthusiasts had to invest in new motherboards to reap the benefits of the platform. That rig was also expensive, so Intel recently launched a more mainstream processor: the Core i5.

The Core i5

The new Intel Core i5 750 is the first release in a series of processors based on a mainstream version of the Core i7 platform. It is a quad-core part based on the “Lynnfield” design, a derivative of Intel’s newest processor architecture, Nehalem, fabricated on a 45nm process, and it utilizes the new LGA1156 socket (note: different from the Core i7’s LGA1366). The Core i5 750 is set to cost around the Rs. 16,000 mark and will operate at 2.66GHz. It features a whopping 8MB of L3 cache, but no Hyper-Threading support.

The Core i5 CPUs run on Intel’s latest P55 chipset, which necessitates a new motherboard purchase. What’s changed, however, is that the Core i5 adopts a different permutation of the fanciest of the Core i7 900-series’ features.


What has been dropped

To make it more economical, Intel has removed the QuickPath Interconnect and triple-channel memory controller and replaced them with a Direct Media Interface (DMI) and a dual-channel memory controller. The difference: QPI is like HyperTransport, with 25.6GB/s of bandwidth; it is the new “front-side bus”, a direct link from the CPU(s) to the north bridge. DMI, on the other hand, is a connection between the north bridge and the south bridge with 2–4GB/s of bandwidth. Does it matter? Not much. Most software doesn’t yet need the kind of throughput QPI offers, and given the minute performance differences between current dual- and triple-channel memory configurations, this is not much of a loss. It is, however, bad for future-proofing. If you were to go out and buy a Core i5 rig right now, then a year down the road, when prices drop and you’d like to move up to the i7, you would have to buy another motherboard and new RAM from scratch. It is not designed with the upgrading consumer in mind. But even remaining on the same platform leaves plenty of options: future offerings include the 32nm Clarkdale Core i5 processors, which will have a thermal design power of just 73 watts, 23% less than that of the 45nm Lynnfield parts. The Core i3 series is also meant to use this platform, and let’s not forget the Core i7 800 series.

Secondly, the integrated PCI Express graphics controller on these Lynnfield CPUs can either deliver 16 lanes of bandwidth to a single PCI Express 2.0 videocard or split the connection into two x8 lanes for an SLI or CrossFire setup. Although this is a cut from the full 32 lanes (for a dual-x16 or quad-x8 configuration) provided by the Core i7’s X58 chipset, the bandwidth reduction should only affect those who run SLI or CrossFire with dual-GPU videocards.

Third, as we mentioned earlier, the Core i5 has no Hyper-Threading. While the Core i7 is a quad-core chip, it appears in Windows as having eight cores, which further improves performance in programs that make good use of multi-threading. Core i5 products lack this feature, which means operating systems will recognize the processor as having four cores and no more. This will have no effect on the performance of most applications, like web browsers and even games, but it will be a blow to those who use 3D rendering software and other programs that excel with multi-threading.
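If you want to see the physical-versus-logical distinction on your own machine, here is a quick sketch assuming the third-party psutil package (pip install psutil):

```python
# Physical vs logical core counts, via the third-party psutil package.
# With Hyper-Threading enabled the logical count is double the physical
# one; on a Core i5 750 both numbers are 4.
import psutil

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)
print(f"{physical} physical cores, {logical} logical cores")
```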
Performance

For the most part, the Core i5’s internal workings are identical to those of existing Core i7 processors, and offsetting the superficially dumbed-down feature set is a more aggressive implementation of Intel’s auto-overclocking feature, Turbo Boost. Whereas the Core i7 900-series CPUs will only increase their multiplier by a maximum of two steps according to system demand (effectively taking a 3.33GHz processor to 3.6GHz, depending on how many cores are in use), the new Lynnfield Core i5 750 can jump up four multiplier steps (2.66GHz to a maximum of 3.2GHz) with Turbo Boost enabled. With manual overclocking you can easily expect to hit the 3.6GHz mark, and even 4.3GHz if you know what you’re doing. This chip has a lot of headroom.
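The multiplier arithmetic behind those numbers is simple; a sketch, assuming the platform’s nominal ~133MHz base clock:

```python
# Sketch of the Turbo Boost arithmetic. Lynnfield/Bloomfield clocks are
# the base clock (BCLK) times a multiplier; Turbo Boost raises the
# multiplier a few steps when thermal headroom allows.
BCLK_MHZ = 133.33  # nominal base clock, an assumption for this sketch

def turbo_clock_ghz(base_ghz: float, extra_steps: int) -> float:
    """Effective clock after adding extra_steps to the stock multiplier."""
    multiplier = round(base_ghz * 1000 / BCLK_MHZ)
    return (multiplier + extra_steps) * BCLK_MHZ / 1000

print(turbo_clock_ghz(2.66, 4))  # Core i5 750: ~3.2 GHz, as quoted above
print(turbo_clock_ghz(3.33, 2))  # Core i7 900-series part: ~3.6 GHz
```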

Our Test

Instead of using a high-end system, we decided to put the Intel Core i5 750 to the test in a real-world system that almost anyone can afford, running just a gaming benchmark for lack of other options.

System Configuration:

Manufacturer: Intel
Family: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz
Architecture: 64-bit
MultiCore: 4 Processor Cores
Capabilities: MMX, CMov, RDTSC, SSE, SSE2, SSE3, PAE, NX, SSSE3, SSE4.1, SSE4.2
Cache
Level 3, 8 MB
Level 2, 256 KB
Level 1, 32 KB

Graphics Card: 1GB PCIe NVIDIA GeForce 9800 GT (Microsoft Corporation – WDDM v1.1)
DirectX Info: Version 10.1
RAM: 2 GB DDR3

Test Results


Benchmark Results

3DMark Vantage Score: Core i5

Checking scores online shows that the Core i5 750’s score of 12624 falls right around the scores set by competing PCs that use Core i7 920 processors, and beats the scores set by Core 2 Duos and most Core 2 Quads.

CPU Test 1 Score: 1794.93 Plans/sec

AI: The AI test features a high-intensity workload of co-operative maneuvering and path-finding artificial intelligence calculations. The test setting is an airplane race course crowded with planes, all attempting to navigate through a series of gates while avoiding collisions with each other and the ground. The test load consists of the movement planning for each airplane. The workload is entirely parallelized, and can utilize multi-core CPUs to the fullest. Faster CPUs will be able to compute more frequent and timely movement plans for the airplanes, resulting in smarter flight routes.

The CPU tests run at a fixed resolution of 1280×1024, and most of the graphics options are drastically reduced. There are almost no post-processing effects, no complex shaders, no shadows, and none of the world outside what you see on screen is modeled. The idea is to limit the impact of the GPU so much that even budget, entry-level cards can display the tests so easily that they’re entirely CPU-limited.

The i5 blew past this test with flying colors, beating the score a 3.0GHz Core 2 Extreme QX9650 quad-core CPU would post (1678).

CPU Test 2 Score: 15.52 Steps/sec

Physics: The Physics Test features a heavy workload of future generation game physics computations. The scene is set at an air race, but with an unfortunately dangerous configuration of gates. Planes trailing smoke collide with various cloth and soft-body obstacles, each other, and the ground. The smoke spreads, and reacts to the planes passing through it.

The test spawns one pair of gates for each CPU core: four pairs on a quad-core CPU. If there’s a hardware physics card in the system, subtract one from that number and then add four (seven pairs of gates in a quad-core system). Each pair of gates is its own independent, physically simulated “world” and does not interact with the other pairs.
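A tiny sketch of that gate-pair rule, just to make the arithmetic concrete:

```python
# The gate-pair rule described above, expressed as a function.
def gate_pairs(cpu_cores: int, has_physics_card: bool = False) -> int:
    """Pairs of gates the physics test spawns for a given configuration."""
    pairs = cpu_cores
    if has_physics_card:
        pairs = pairs - 1 + 4  # one pair moves to the PPU, four more are added
    return pairs

print(gate_pairs(4))        # 4 pairs on a plain quad-core CPU
print(gate_pairs(4, True))  # 7 pairs with a hardware physics card
```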

Since we didn’t have a PhysX card, the system performed at normal levels expected for the configuration.

Our Evaluation

Gaming

The tests indicate that the Core i5’s gaming performance matches or betters that of the Core i7 920. This, more than anything, is likely due to Lynnfield’s improved Turbo Boost feature. However, if you already own a high-end Core 2 Duo or Quad, upgrading on the basis of gaming performance alone isn’t the best idea. If you are in the market for a new system, definitely buy the Core i5.

Power

We couldn’t test this ourselves, so we’ll take Intel’s word for it. Intel has been going to great lengths to ensure its processors use as little power as possible, and the Core i5 is no exception. The new power management feature throttles down cores automatically when they aren’t being used. This, along with a general refinement of the manufacturing process, has resulted in a processor that just sips power. Our guess is that a Core i5 system, even when paired with a high-end graphics card, will idle at under 100 watts for the entire system. That is an impressive achievement.

Overall:
The Core i5 750 looks to be a solid winner. Its true strength lies in Turbo Boost technology, with which the processor can automatically overclock each of its four cores independently to match the workload at hand. Down-clocking works equally well thanks to the new power-saving features. The only thing it lacks compared to the other Lynnfield processors is Hyper-Threading.

This system is highly recommended for those looking to dip their toes into the Nehalem platform without breaking the bank. The Core 2 Duo and Core 2 Quad parts will eventually die out, putting an end to the LGA775 platform, so it makes more sense to buy into this far superior platform now than to invest anew in an old one.

Cheat Sheet:

If you’re as confused as a whole lot of us by all this information overload, here’s a cheat sheet you can use to compare Intel’s different offerings. (Source: PC World)
Intel Lynnfield Chips

Beyond The Core – Intel Roadmap 2010


Ashar H. Zaidi, Country Manager, Intel Pakistan, recently shared Intel’s vision for 2010. One of the more interesting things shared was a roadmap of Intel’s tick-tock development model until 2012. Each “tock” is the introduction of a new architecture, while each “tick” is a shrink of the production process. Currently Intel is introducing the 45nm Nehalem tock, and in 2010 you can expect a 32nm shrink of Nehalem codenamed Westmere.

A new architecture will also arrive in 2010: that tock will introduce the 32nm Sandy Bridge, which will succeed the 45nm Nehalem architecture. Sandy Bridge (formerly known as Gesher) will have up to eight cores on the same die, 512KB of L2 cache and 16MB of L3 cache. Also new will be the AVX (Advanced Vector Extensions) instructions, which might be as significant as the introduction of SSE in 1999. According to Intel, AVX will enhance the performance of certain matrix multiplication instructions by 90 percent.
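AVX itself is a CPU instruction set, but the kind of speedup wide vector units target is easy to glimpse from any vectorized math library. A rough Python sketch (timings purely illustrative; numpy’s SIMD-optimized BLAS backend stands in for AVX here):

```python
# Rough illustration of what wide vector units buy: the same matrix
# multiply written as a pure-Python loop versus a vectorized call
# (numpy dispatches to SIMD-optimized BLAS kernels under the hood).
import time
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c_loop = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
          for i in range(n)]
loop_time = time.perf_counter() - start

start = time.perf_counter()
c_vec = a @ b
vec_time = time.perf_counter() - start

assert np.allclose(c_loop, c_vec)  # both paths compute the same product
print(f"naive loop: {loop_time:.2f}s, vectorized: {vec_time:.5f}s")
```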

Even though Ashar didn’t go into further architectures, the next tick after that will be a 22nm shrink of Sandy Bridge. Most of you will probably already have heard about these upcoming processors, but if you haven’t, then know that in 2011 you can expect the 22nm Ivy Bridge, and one year later the new Haswell architecture, which is expected to replace Sandy Bridge in 2012. That architecture is probably still four years away from us in Pakistan, but early information tells us it will have a native eight-core design, a whole new cache architecture, “revolutionary” energy-saving technologies, the FMA (Fused Multiply-Add) instruction set and possibly on-package vector co-processors.

Ashar also talked about the chip giant’s plans for the value, mid-range, performance and extreme segments. Already in the works, Intel’s Lynnfield (LGA1156) platform will start out with a trio of processors: two Core i7-8xx models and one Core i5-7xx model (i5-750 review coming up next). By 2010, however, Intel will introduce the new Clarkdale family across the mid-range segment, with clock frequencies from 3.2GHz up to 3.46GHz. These will be Intel’s first 32nm processors and will grab the relay baton from the Core 2 Duo/Core 2 Quad series.

Intel Client Roadmap 2010

It is expected that in 2010 Intel will also announce the six-core Gulftown processor, listed after the Core i7 Extreme in this presentation. Rumors suggest Intel will brand it the Core i9 series. Ashar said to stay tuned for a January announcement.

Intel Roadmap 2010 - Westmere

Ashar talked a great deal about the upcoming Westmere. Like Nehalem, Westmere will support technologies such as Hyper-Threading, Intel Turbo Boost and an integrated memory controller. At launch, two Westmere-based cores will be offered: Clarkdale for desktops (mainstream/value segments) and Arrandale for notebooks (mainstream/value segments).

Both Clarkdale and Arrandale will sport two processing cores with Hyper-Threading, allowing up to four threads to run simultaneously, and they’ll also be the first Intel CPUs to feature integrated graphics on the CPU package (although not on the same piece of silicon as the CPU die). Intel also says both CPUs will support dual-channel DDR3 and carry 4MB of cache. In another first, the new processors will support Intel’s new AES instructions: seven new instructions focused on delivering accelerated encryption and decryption. This should reap benefits for users concerned about data security who would like to encrypt their hard drives.
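The acceleration is transparent to software: a crypto library whose AES code path uses the new instructions simply runs faster, with no change to calling code. A minimal sketch in Python, assuming the third-party PyCryptodome package (pip install pycryptodome):

```python
# Minimal AES encryption sketch. On CPUs with the new AES instructions,
# a library like this runs each round in hardware rather than via
# software lookup tables; the calling code is identical either way.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)           # AES-128 key
cipher = AES.new(key, AES.MODE_GCM)  # authenticated encryption mode
ciphertext, tag = cipher.encrypt_and_digest(b"disk sector contents")

# Decryption verifies the tag and raises ValueError on tampering.
decipher = AES.new(key, AES.MODE_GCM, nonce=cipher.nonce)
assert decipher.decrypt_and_verify(ciphertext, tag) == b"disk sector contents"
```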

The performance benefits of these chips will largely come from the improved bandwidth and reduced latency Intel reaps by integrating the CPU and GPU more closely on the same package, as well as from higher clock speeds. Unlike the 32nm Westmere CPU, the graphics chip will be based on Intel’s existing 45nm process.

Intel 2 Chip Solution

This move will make life tougher for the likes of NVIDIA, which has touted its superior graphics performance with integrated products like the GeForce 9400M, winner of numerous design wins including the Apple MacBook. With graphics moving off the chipset and directly onto the CPU package, it becomes more efficient for someone like Apple, Dell or HP to just use the integrated graphics provided by the CPU rather than go to the expense of an NVIDIA chipset. Fortunately, Clarkdale and Arrandale support switchable graphics, so a discrete GPU can be combined with the CPU to deliver superior 3D performance when needed for apps like games, then switch back to the integrated graphics to conserve power.

Finally, Intel also talked about a renewed emphasis on packing more features, such as better graphics, into mobile chips, particularly those going into laptops.


My Own Thoughts.

It seems the recession is biting Intel. How else can you explain the increased focus on the mainstream and value segments rather than the extreme one? Gulftown, for example, is not launching till late 2010. Intel knows that one of Core i7’s key weaknesses is cost. All Core i7 CPUs require Intel’s X58 platform and pricey DDR3 memory, and as any enthusiast can tell you, motherboards based on Intel’s X-series chipsets have never been cheap. While X58 motherboard prices have come down considerably since launch, they still start right around the Rs. 24,000 mark, with prices quickly climbing from there for more feature-rich boards.

To address this issue, Intel is planning to introduce mainstream derivatives of Nehalem. These processors will utilize a new CPU socket and 5-series chipset, making them incompatible with the X58/Core i7 platform and vice versa. They’ll also utilize a dual-channel memory controller rather than the triple-channel controller used on the Core i7.

But I also believe Intel realizes it is well ahead of the competition. AMD’s quad-core Phenom II parts are more competitive with today’s Core 2 Penryn CPUs than with Nehalem, so again, there’s no rush to introduce new parts in this space when your existing lineup is more than adequate to outperform the competition. Intel isn’t even bothering with quad-core versions of Arrandale and Clarkdale, it’s so far ahead.

Anyway, here is a quick summary guide for those who got lost in the tick-tock wave (Source: Wikipedia):

Typically, the same dies are used for uniprocessor (UP) and dual-processor (DP) servers, with the DP server variant using an extra QuickPath link for inter-processor communication.


Dual-core, 32 nm (dual-channel, PCIe, graphics core):
Mobile: Arrandale (80617); Desktop: Clarkdale (80616)

Quad-core, 45 nm (dual-channel, PCIe):
Mobile: Clarksfield (80607); Desktop/UP server: Lynnfield (80605); UP/DP server: Jasper Forest (80612)

Quad-core, 45 nm (triple-channel):
Desktop: Bloomfield (80601); DP server: Gainestown (80602)

Six-core, 32 nm (triple-channel):
Desktop: Gulftown (80613); DP server: Gulftown (80614)

Eight-core, 45 nm (triple-channel):
MP server: Beckton (80604)


Enterprise 2.0 – Fostering Innovation


Enterprise Social Computing is the next generation of online collaborative technologies and practices that people use within the enterprise to share knowledge, expertise, experiences and insight with each other. (Definition: IT @ Intel)

Over the last few years, as open APIs, social networking platforms, cloud computing, open identity services, sensor-driven databases (such as GPS and OpenStreetMap) and even people (for example, Amazon’s Mechanical Turk) have created open ecosystems in which anyone, including businesses, can participate, both to contribute and to consume, the Web has become the ultimate ‘people platform’, one that is incredibly agile and combined with economies of scale that are very hard to match. However, it has thrown up its own challenges, unpredictabilities and risks, which must be dealt with both routinely and successfully.

To perform well in this changing business environment, organizations have adopted a more positive mindset towards Enterprise 2.0 technologies, since many of them empower employees, making the organization nimbler and more innovative in a very challenging world. They also serve to protect the heart and soul of the enterprise: its knowledge.

Some of the reasons why Enterprise 2.0 is taking off are:

Protection of Intellectual Property

Employees in all enterprises are already using open, ‘insecure’ social media tools. Knowledge workers use these tools for many reasons: they fit their lifestyles, are universally accessible, are easy to use and, most of all, are highly empowering. For enterprises, however, they raise concerns about intellectual property and other information assets. Many of these sites have policies that effectively require users to give up their right to privacy, and some lay claim to ownership of all content posted on them in perpetuity (an IP nightmare), including the right to share that information with third parties. This means that if employees use an external blogging or microblogging site to communicate, their posts may be read by anyone, anywhere, and the sites themselves can lay claim to information that may be confidential in nature.

Thus there is a need to define balanced security measures and controls, update use policies and ensure all employees know how to use these technologies appropriately. Additionally, if enterprises do not take up initiatives like Intel IT’s, which provided its own social computing platform, the use of fragmented internal tools and insecure external tools will continue to grow.

Beyond IP security, however, enterprises have learnt that there are other reasons to give employees access to Enterprise 2.0 tools.

Spur Innovation

Rick Hutley, VP of Internet Business Solutions at Cisco, said: “There’s a huge opportunity to leverage skills and expertise you already have in your company, but the problem is finding it.” The great promise of Enterprise 2.0 is to uncover and tap into the hidden talent of an organization. Social computing, done right, can address many challenges, such as helping employees find relevant information and expertise more quickly, increasing interactive collaboration across the enterprise, breaking down silos, spurring radical innovation and capturing the tacit knowledge of existing employees.

Amongst other things, social computing enables:

– Improving the sharing, discovery and aggregation of information

– Finding experts fast

– Expanding networks and enhancing career development

– Aiding real-time collaboration

– Sharing innovative ideas

– Building communities

Attract, Develop & Retain Gen-Y As Employees

Enterprises have also realized that the ‘Google generation’ comes with a different mindset than the one that pervaded the baby-boomer era, and that Enterprise 2.0 tools can help attract and retain these employees. It’s a known fact that in traditional organizations employees may work closely with people worldwide, yet in many cases wouldn’t recognize team members if they passed them in the hall.

We are now transitioning from closed command-and-control structures, which bred a fear of making mistakes, to a workplace that is more consensus-driven and informal and requires more mentoring and exploration of options. The new workers are more accustomed to working across divisions than the previous generation, which was stuck in its silos, and this is driving massive behavioural shifts in the workplace. Tools such as these can help engage the Gen-Y worker and connect employees, making an enterprise even as massive as Intel feel “small” and helping tackle feelings of isolation. They can also help mitigate the impact of a maturing workforce, help employees work more effectively across time and distance, and improve the speed of finding relevant information and people.

Implementation Of Enterprise 2.0

One approach to implementation can be read at IT@Intel, which hosts Intel’s own case study, ‘Developing An Enterprise Social Computing Strategy’. For those who just want to experiment with these technologies, the 2.0 path can start with something as simple as an internal company-wide blog, which can be used for a variety of purposes.

In the Future

Social computing’s new collaborative technologies will provide effective channels for communication, collaboration, teamwork, networking and innovation. In the post-internet world, this is increasingly how companies will unleash innovation within their processes and secure the best and brightest talent.

Enterprise 2.0 – Social Computing II


In my previous post we took a look at why enterprises have adopted a positive mindset towards Enterprise 2.0 technologies. They face pressure to adopt these tools for three main reasons: protecting intellectual property that employees would otherwise share over open, ‘insecure’ external social media sites; uncovering and tapping into the hidden talent of the organization through better sharing, expert-finding and real-time collaboration; and attracting, developing and retaining Gen-Y employees.

One approach to implementing such a platform can be read at http://communities.intel.com/docs/DOC-3603, while more traditional enterprises can set out on the path with something as simple as an internal company-wide blog.

Check out the presentation below for more information on Intel’s version of social computing:


Enterprise 2.0 – Social Computing


In 1999, Rick Levine, Christopher Locke, Doc Searls and David Weinberger wrote in The Cluetrain Manifesto:

“A powerful global conversation has begun. Through the internet, people are discovering and inventing new ways to share relevant knowledge with blinding speed. As a direct result, markets are getting smarter – and getting smarter faster than most companies.”

Amongst their theses, the authors explored the role of intranets within organizations, theorizing that intranets would re-establish real communication amongst employees, in parallel with the internet’s impact on the marketplace (thesis 48), and would lead to a ‘hyperlinked’ organizational structure that takes the place of (or is used alongside) the formally documented organization chart.

Ten years on, the easy connections brought about by cheap devices, modular content and shared computing resources are having a profound impact on our global economy and social structures, fundamentally changing the way we do business. Driven by the network, the communication and collaboration tools flourishing on the web, tools like YouTube, Facebook and Twitter, have changed not only how we communicate with our customers and stakeholders but also how we organize ourselves. Institutional sources like corporations, media outlets, religions and political bodies have declined in significance, with individuals increasingly taking cues from one another rather than from these former mass outlets.

A History Of Social Ties

Social computing traces its origins to the research done in 1973 by Mark Granovetter, a sociologist now at Stanford.

Granovetter’s great insight was “The Strength of Weak Ties” (SWT), in which he proposed that weak ties might actually be the more important ones for innovation and knowledge sharing.

Strong ties and weak ties are exactly what they sound like. Strong ties between people arise from long-term, frequent and sustained interactions; weak ties from infrequent and more casual ones. The ‘problem’ with strong ties is that if persons A and B have a strong tie, they are also likely to be strongly tied to all members of each other’s networks. This leads to redundancy in ideas, since members tend to think alike. Weak ties, however, are relationships between members of different groups, and they lead to a diversity of ideas because they tie together separate modes of thought.

SWT’s conclusion was that strong ties are unlikely to be bridges between networks, whilst weak ties are good bridges. Bridges help solve problems, gather information and import unfamiliar ideas; they help get work done quicker and better. Subsequent research has explored whether Granovetter’s hypotheses and conclusions apply within companies, and they appear to be quite robust. Weak ties have been shown to help product development groups accomplish projects faster, to reduce information search costs and to foster greater innovation in the workplace.
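The bridge idea is easy to see in graph terms. A small sketch using Python’s networkx package (the two cliques and the single weak tie are invented for illustration):

```python
# Two tightly knit groups (strong ties) joined by one weak tie. Removing
# an in-group edge changes little; removing the weak tie disconnects the
# groups entirely, which is exactly what makes it a bridge.
import networkx as nx

g = nx.Graph()
g.add_edges_from(nx.complete_graph(["A1", "A2", "A3", "A4"]).edges)  # group A
g.add_edges_from(nx.complete_graph(["B1", "B2", "B3", "B4"]).edges)  # group B
g.add_edge("A1", "B1")  # the lone weak tie between the groups

print(list(nx.bridges(g)))  # [('A1', 'B1')]: only the weak tie is a bridge
```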

Thus the ideal network for a present-day knowledge worker probably consists of a core of strong ties and a large periphery of weak ones. Because weak ties by definition don’t take much effort to maintain, there’s no reason not to form a lot of them (as long as they don’t come at the expense of strong ties). This is why social computing is coming to an enterprise near you.

The Coming Era of Social Computing

According to Andrew McAfee, Associate Professor at Harvard Business School, Enterprise 2.0 is the use of emergent social software platforms within companies, or between companies and their partners or customers. This technology has the potential to radically change the way people interact with both information and one another on the internet. What’s the value? The ability to generate, self-publish and find information more efficiently, plus share expertise, in a way that is so much easier and cheaper than earlier knowledge management attempts.

A corporate SNS lets users build a network of colleagues, keep abreast of what that network is up to, and even exploit it by doing things like posting a question that everyone in the network will see, all within the confines of the enterprise itself. These activities are especially valuable in a company that is large and/or geographically distributed, where you can’t reach all your colleagues just by bumping into them in the hallway.

This new paradigm is about considering people as the engines of the organization and their knowledge and social capital as the fuel: a new kind of fuel that can’t be stocked, replaced or substituted by a commodity or cheaper means of production. It’s also about a new way of looking at business. As Lew Platt, former CEO of Hewlett-Packard, said: “If HP knew what HP knows, we would be three times as profitable.”

The subsequent posts will address this field of social computing and how large enterprises are managing this transition.


IT @ Intel – Enterprise Computing


Recently Intel Malaysia held a conference to share some key insights regarding their use of IT in the Enterprise to address key business challenges.

The topics which came under discussion were:

1. How Intel Is Managing IT Through A Downturn

2. Social Computing & Sustainability

The session on managing IT through a downturn looked at how to drive business productivity, the potential of solid-state drives (SSDs) to replace hard disk drives (HDDs) in the enterprise, server and data center optimization, driving employee efficiencies, and continuing IT efficiencies. I will discuss more of this in subsequent posts.

On social computing, Intel IT is implementing an enterprise-wide social computing platform that combines professional networking tools with social media such as wikis and blogs, and integrates with existing enterprise software. The goal is to transform collaboration across Intel, addressing top business challenges such as helping employees find relevant information and expertise more quickly, breaking down silos, attracting and retaining new employees, and capturing the tacit knowledge of mature employees.

Under sustainability, Intel centered the discussion on data centers, covering topics such as air-flow management and optimization, economizers, rack-level cooling, high-ambient-temperature operation, power-efficient systems, DC power distribution (something very new and upcoming in the data center world), Application Productivity Link, server power management, containerization (CC Systems Integration), and design engineering strategy.

This is a topic close to my heart, so expect a lot more on sustainability and social computing soon. Stay posted.
