Data Center is our focus

We help you build, access and manage your data center and server rooms.

Structured Cabling

We structure your cabling: fiber optic, UTP, STP and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software and network (ITOM), and maintain your ITSM services.

Our Great People

A great team supporting happy customers.

Saturday, December 19, 2009

IT shops weigh Microsoft buy of Opalis Software

By Mark Fontecchio, News Writer
16 Dec 2009 | SearchDataCenter.com

IT pros using Opalis Software have strong opinions on the pending
Microsoft acquisition of Opalis, depending on how much they use the
data center automation software. The deeper they are into Opalis' run
book automation and job scheduling features, and the more they lean on
support, the more concerned they are.

James Hankey, vice president of IT and director of operations at
financial investment firm John G. Ullman & Associates (JGUA) in New
York, professed shock at the news announced late last week.

"I'm afraid we're going to have to end up paying more money," he said.

JGUA has been an Opalis data center automation customer for about 10
years. In 2000, the company needed to streamline processes required to
produce client reports. Hankey said JGUA spent a lot of money paying
people overtime for processes that the Opalis software performed using
run book automation. Now JGUA uses it primarily to monitor those
processes.

"Microsoft is lousy in terms of support," he said. "Other Microsoft
products we use we have to pay $250 just for a call, and we use the
support lines heavily with Opalis."

Victor Martinez, director of information systems at Kawasaki Motors
Corp., USA, has a different take on the deal. Kawasaki started using
Opalis Integration Server software to help run its online commerce
site. Most data the site needed -- including catalog, dealer and
pricing information -- sat on the company's mainframe. Kawasaki needed
an automated way to manage the transfer of that data to and from the
staging and production servers that ran the site. They used Opalis'
data center automation software to do it.

The company has since scaled back its e-commerce site, and now uses
Opalis for job scheduling process management and automation. Martinez
said he was happy to hear about the acquisition, and hopes Microsoft
bundles Opalis software with other products so that Kawasaki pays even
less to use it. The company rarely uses Opalis support lines.

"Anything Microsoft gets their hands on, they commoditize," he said.
"From our standpoint, that's positive. If we were really using Opalis
in creative ways, I might have some concerns around it. Custom things
might not be built or I might not be as creative with the product as
in the past. But for us it's been kind of a sleeping giant that does a
lot of good stuff for us without a lot of work."

That seems to be the direction Microsoft could take Opalis, according
to Microsoft channel partners. The thought is that Opalis software
will be sold with Microsoft System Center, the company's Windows
management products. Partners also said Microsoft's purchase of Opalis
is a way for it to push harder into the cloud computing space.

Mark Fontecchio can be reached at mfontecchio@techtarget.com.

Wednesday, December 16, 2009

Green data center allows virtualization growth for Congress

By Alexander B. Howard, Associate Editor, SearchCompliance.com
14 Dec 2009 | SearchDataCenter.com

Skyrocketing energy costs in the data center aren't just a corporate issue. Server consolidation, utilization efficiency and virtual machine sprawl also challenge the biggest enterprise in the United States: the federal government.

Greening the Capitol
Two years ago, House Speaker Nancy Pelosi launched an initiative designed to reduce the carbon footprint of the United States Capitol. The implementation of a green data center by House Information Resources fits into this broader context.


Over the past several years, the operators of the data center that supports operations at the U.S. House of Representatives reduced energy consumption by over 50%, saving taxpayers money and allowing the rollout of server virtualization to member offices.

That effort took hard work, but the return in energy efficiency and carbon footprint reduction also helped improve security and disaster recovery, and brought the flexibility necessary to enable desktop virtualization in the future.

"Long before there was a green initiative, before it became fashionable, we had a power issue," said Jack Nichols, director of the Enterprise Operations department for the House Information Resources (HIR).

Eight years ago, he explained, the House data center in the Ford office building was "chock full of equipment" and was right near the limit of the finite power that could be delivered to the old structure. After years of right-sizing, consolidation and virtualization, the House data center is now saving nearly $1,000 a day in energy costs, Nichols said. Best of all, "We were able to do this virtualization and consolidation effort without an additional dollar in our budget, using planned lifecycle replacement money."


Nichols, an Air Force veteran, knows a thing or two about working under pressure. He used to be attached to the White House communications agency, where he experienced the unique pleasure of having the President of the United States looking over his shoulder while he set up a mobile network. After a tour through contracting and some time consulting for Lucent, he now manages the IT infrastructure for the House enterprise operations group.

The enterprise ops group had a tactical reason for approaching server consolidation and creating a green data center, Nichols explained in an on-site interview. "We had about 500,000 watts available to us for powering the servers, which was very near capacity," Nichols said. "Another 750,000 watts or so of power was available to us for the servers to be cooled. We needed to look at ways technology could help us reduce consumption."

The group looked at everything available to figure out what was possible. In the end, the group adopted tried-and-true best practices for reducing energy consumption in a green data center. Nichols said that they've reduced energy consumption to between 125,000 and 150,000 W, which equates to about $1,000 per day in real savings, assuming a cost of 11 to 12 cents per kilowatt-hour. That adds up to about $365,000 saved annually for taxpayers.
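
For readers who want to check the arithmetic, here is a minimal Python sketch. The "before" figure assumes consumption was near the 500,000 W server capacity Nichols cites; the "after" figure and the rate are midpoints of the ranges quoted above, so this is an estimate, not an official calculation.

    # Back-of-the-envelope check of the House data center savings figures.
    watts_before = 500_000   # W; assumes consumption near the quoted 500 kW capacity
    watts_after = 137_500    # W; midpoint of the 125,000-150,000 W quoted
    rate_per_kwh = 0.115     # $/kWh; midpoint of the 11-12 cent range

    kw_saved = (watts_before - watts_after) / 1000.0
    daily_savings = kw_saved * 24 * rate_per_kwh
    annual_savings = daily_savings * 365

    print(f"{kw_saved:.1f} kW saved -> ${daily_savings:,.0f}/day, ${annual_savings:,.0f}/year")
    # ~362.5 kW saved -> roughly $1,000/day and $365,000/year, matching the article.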

How did they reduce data center power costs?
First of all, test environments weren't being decommissioned when they were no longer needed. "We inventoried all of the servers to see what could get turned off," Nichols said. After that, they looked at whether servers were matched to engineers' needs, right-sizing servers for workloads. Nichols said that they achieved a 45% power reduction in the chosen servers through right-sizing alone.

Consolidation of applications, especially in the data center's Unix environment, was the next target. His group was able to host multiple applications on a single platform, leading to a 55% energy reduction for the relevant machines.

Finally, virtualization was a significant factor. "The biggest savings here was in our test environments," said Nichols. "We realized 75% to 80% power savings there. We had about 200 servers, which we virtualized into a single rack."

Nichols was also able to significantly improve utilization rates. "In our Windows environment, for instance, most of the servers were at 2% to 5% utilization," he said. "In virtual settings, not only could we host multiple applications on a single server, we could build in a level of fault tolerance that wasn't there. That meant that we could be assured of a downtime measured in minutes, as opposed to hours or days."

"With every watt you save from a server standpoint, there's about another equal watt saved in cooling. As a result, we were able to change the way we cool the data center. We were able to remove or turn off CRAC units."

These energy savings have been all about following classic best practices. "We spent time clearing cables, avoiding barriers that inhibit air flow, setting up hot-aisle/cold-aisle arrangements, and ensuring we had proper placement of our CRACs. That's all what led to a green data center," Nichols said.

New horizons for a virtualized House: BC/DR, improved security
Creating this much headroom below the maximum power ceiling has opened up new possibilities for HIR to provide additional services to member offices.

"In many ways, the House is a collection of small businesses," said Nichols. "In the past, members have had physical servers in the office. We'd offer best practices for backup and maintenance." That member's office IT staff was responsible for everything, from networking to databases to Web development.

Now, power consumption reductions have led directly to the ability to virtualize members' offices. Every member office's data is uniquely encrypted to its virtual server, with a different encryption algorithm associated with each office. That unique identifier was a key step. "When you can get that hurdle cleared and we can say we can uniquely secure your data, then we can talk about centralizing," Nichols said.

That common security posture made the HIR chief information security officer happier, said Nichols. Decreased administration and capital costs pleased members too, he said, in terms of acquisition and repair. Nichols is happy about improved business continuity and disaster recovery capabilities, as centralized data is now replicated to a secondary data center outside the district.

Blade servers save space, energy in this green data center
The HIR enterprise operations department was able to take two rows of individual servers and collapse them into a single blade frame rack.


[Photo: before the consolidation]

That meant moving from about 300 servers to about 30. That one blade frame consumes about 20,000 W, Nichols said. His department has now created virtual servers for more than 150 member offices so far, which saves the direct cost of the server itself as well as administration and 400 to 500 W per server in each office.


[Photo: after the consolidation]

Nichols keeps a close eye on the potential for virtual machine sprawl, a common headache for modern data center operators. "We're under very tight constraints in terms of who has the rights to create a virtual server," he said. "We have restrictions and safeguards to limit that. In our initial foray into virtualization, there were tradeoffs between achieving the greatest consolidation ratio and achieving it in a secure fashion. We had to examine the business process behind each one of the servers. It's gotten better with third-party tools that can monitor traffic between hypervisors."

There are other possibilities under discussion now, like the potential for creating a private cloud or rolling out desktop virtualization down the road. Not every technology is going to be right for this environment, however. "If you can't clear the security hurdle for deployment, it's not going to be a good fit," said Nichols.

For now, Nichols maintains a heterogeneous virtualization environment and is keeping an eye out for other areas where the office can reduce energy consumption or improve utilization. "We want everyone competing, whether they're hardware or software vendors, so that we can get the best bang for the buck for the American taxpayer."

Alexander B. Howard is the associate editor of SearchCompliance.com at TechTarget. His work focuses on how regulations affect IT operations, including issues of data protection, privacy, security and enterprise IT strategy.

http://searchdatacenter.techtarget.com/news/article/0,289142,sid80_gci1376814,00.html

The Future of Digital Signage

The prospective client's 11 o'clock appointment slipped to 2 p.m., but I
was glad to welcome them to our small office. In a brief discussion
about the company, we acknowledged that we do indeed focus on developing
Digital Signage in Indonesia.

What interests me is that Digital Signage is not new, yet the technology
keeps evolving. Broadly it divides in two: hardware-based Digital
Signage and software-based Digital Signage. Frankly, we focus on the
software side, because it allows broader and more flexible development.

Digital Signage is now easy to find everywhere: on the street, above
police posts, at major intersections, in front of office-building
elevators, even in some apartment buildings. The purpose stays the same:
to introduce or visualize something through digital media, display
technology in particular. This technology is said to be the best at
capturing people's attention and making them remember the message,
though the message can also be forgotten just as quickly.

That is why the visualization, the presentation on the display, which
may be glanced at for only a moment, becomes so important. Over the past
few years we have developed the finosMDS application as an option,
particularly for the banking industry, to replace LED-based or manual
exchange-rate boards with digital signage. Even so, the results were
less than optimal. We shifted to other industries a few years ago,
without much response. But now the change is remarkable: we find digital
signage everywhere, and some operators even use mobile units to carry
enormous displays that publicize their advertising.

Lately we have become interested in Navori, a Switzerland-based company
focused on Digital Signage software. We are working out strategies to
bring the product into Indonesia smoothly, even though the cost is
considerable. We have come close on several opportunities using this
software, but none has fully closed yet.

Navori offers an attractive take on Digital Signage software which,
returning to my original premise, can be easily configured and
templated, so content can be swapped quickly. Navori also invites
integration with other applications, such as the queuing system we have
offered for a long time.

Digital Signage may one day be found not only in outdoor media or the
rooms we enter, but perhaps on your mobile phone as well. Who knows? The
point remains the same: to deliver a message in a more engaging way.
Several prospects have even asked for interactive Digital Signage that
can communicate, letting us choose what we want to see and respond to
it. In the kiosk world this is already common, but in digital signage it
could be something compelling. And then there is personalization:
imagine that when I walk into my office lobby, the digital signage shows
content just for me. Wow. Unlimited ideas, right?

OK, we shall see whether we manage to introduce Navori in Indonesia more
successfully in 2010.

Sunday, December 13, 2009

ICT EXPO - JCC, 14-16 DEC 2009

Dear colleagues,
Ladies and gentlemen, you are all invited to attend the ICT EXPO held at the JCC from December 14 to 16, 2009 (09:30 - 17:00), by registering online for FREE at: http://www.ictindonesia.com/reg/ . Besides the chance to win door prizes, you will also receive an exhibition catalog worth Rp 50,000.
To attend ICT FORUM 2009, please click:

NCR acquires DVDPlay, will convert 1,300 kiosks to Blockbuster Express brand

Caroline Cooper

• 10 Dec 2009

NCR Corp. has announced its acquisition of Campbell, Calif.-based
DVDPlay, which operates approximately 1,300 DVD-rental kiosks in the
United States and Canada. In a news release, NCR says it will convert
the DVDPlay kiosks to its Blockbuster Express-branded line and is
raising its installation forecast from 2,500 to 3,800 kiosks by the end
of 2009. Terms of the agreement were not disclosed.

Alex Camara, vice president and general manager of NCR Entertainment,
says DVDPlay's presence in California, Colorado and Illinois will
allow NCR to extend its DVD-rental reach to new markets, bolstering
its efforts to compete with redbox.

"Our acquisition of DVDPlay accelerates NCR's growth in the
DVD-rental business as we expand our operations, technology leadership
and consumer experience in key markets with premium retail partners.
Over the past six months, we've seen tremendous enthusiasm from
consumers and retail partners for our DVD-rental kiosks. We've been
able to deploy quickly and maintain high levels of availability. This
further investment will help us bring our kiosks to even more
consumers in even more locations around the United States, especially
in major markets in California and other parts of the western U.S."

Coinstar Inc., whose redbox brand is NCR's primary competition in the
DVD-rental kiosk market, today announced it has exceeded its forecast
for 20,000 redbox kiosks installed by the end of the year.

Friday, December 04, 2009

Disaster Recovery Is All About Imagination

IT Management | Guest Opinion | Kelly Lipp, Wednesday, December 2, 2009

Within our IT-centric world, we tend to forget that disaster recovery is more – much more – than getting mission-critical data restored. In fact, getting the data back might be the easiest part of the process. Tougher is knowing what is going to happen with that data after it is restored.

Take a large disaster, for example. How will your employees gain access to that data? What about your customers? What happens if your key users are no longer available to use the data? (After all, you have just experienced a disaster.) Many of our basic assumptions are probably not correct.

And what about the smaller, more common disasters? What happens if you lose a single mission-critical system? Have you thought about how you would exist without that system for some period of time? It is these kinds of disasters that surprisingly require the most thought.

Disaster recovery is about imagination. A simple exercise involving key business and IT folks sitting around a conference table and imagining what disaster could happen -- and what might happen afterward -- can set a business on the right course. If you give thought to something before it happens, the chances of a better reaction are higher.

For this exercise to be most effective, it is essential that you involve as many people outside of IT as possible. If this is an IT-only exercise, it will be much less effective. Use your other stakeholders. Their impressions are probably different from yours; different, but equally as valuable.

The steps below will guide you in your imagination process.

1. Imagine the most likely events that will cause disruption within your data center.

For most of us there are perhaps two or three events that will wreck our ability to conduct our business. Some are geographical: hurricanes in the southeastern United States, earthquakes in California or tornadoes in the Midwest. Other problems, like water main breaks and fires, do not have a geographic component and can affect any of us.

Part of your exercise is to think about which of these could happen and to assess what the impact might be. Impacts include the inability to access your data center or your entire site, or the unavailability of key personnel who cannot reach your site.

Some events are smaller than others. For instance, an event could be as simple as losing the telephone lines into your site. This is probably more likely than the hurricane and will cause as much disruption. Focus on these. They are much more likely to occur. Much of our disaster recovery planning involves worrying about things that will not happen while ignoring those that are much more probable. Granted, the Black Swan event, the highly unlikely event, will be devastating, but do not become too focused on it. The smaller disasters will hurt just as much and are more likely to happen.

Think of as many of these events as you can, contemplate each and rank them according to how likely they are.

A good template for the discussion might be “What would we do if…” Let your imagination run wild. The more of these you think of now, the more likely you are to recover from them when they happen.

2. Determine the business impact of these events.

Again, some events have a greater impact than others. It may be that the relatively small event, like losing Internet access, has a higher impact on your business than a hurricane, especially since it is more likely to occur. In some cases, a catastrophe like the hurricane will make it impossible to conduct business afterward.

Many events will be localized. You may lose your e-mail database. The rest of the data center is up and running but your mission-critical communications application is down. What is the impact of this?

Business impact has two components: the criticality of the application and how long it will be inaccessible. List both of these components during your exercise.

Good questions are, “How much will it hurt if it is down for an hour? How much if it is down for a day?”

3. Rate the business impact from high to low.

There are lots of applications in most of our environments. Some are much more critical than others. Many of the things that can happen are simple annoyances while some can be devastating fairly quickly. Rate the impacts to your business.

Dollar impacts can often be elusive, but getting to this critical metric will be helpful in the later stages of the exercise. If you know how much one of these will cost, you will find it easier to gain funds to mitigate them. There may be a relatively inexpensive way to avoid the problem.
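
Since steps 1 through 3 boil down to a ranked list, a few lines of code can keep the exercise honest. A minimal Python sketch follows; the events, likelihood scores and hourly dollar figures are purely illustrative placeholders, not recommendations.

    # Illustrative ranking of disaster events by likelihood and hourly cost.
    # Replace these made-up entries with your own from the planning exercise.
    events = [
        # (event, likelihood 1-5, cost per hour of outage in $)
        ("E-mail server loss",     4,  2_000),
        ("Internet access outage", 3, 10_000),
        ("Telephone lines cut",    3,  4_000),
        ("Hurricane / site loss",  1, 50_000),
    ]

    # Rank by a simple risk score: likelihood x hourly cost.
    for event, likelihood, hourly_cost in sorted(
            events, key=lambda e: e[1] * e[2], reverse=True):
        print(f"{event:<24} risk={likelihood * hourly_cost:>7,}  "
              f"one day down: ${hourly_cost * 24:,}")

Even a toy table like this surfaces the article's point: the mundane outages often outrank the Black Swan once likelihood is factored in.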

4. Develop a comprehensive plan to recover from each event, starting with the high impact events.

Pick the event that will have the most impact on your business. Imagine how you would maneuver around it.

Using e-mail as an example, it may be possible to use an alternative communications path. Perhaps most of your key employees have external e-mail accounts. Knowing their addresses and having a plan to switch communications to that path might be adequate to mitigate your e-mail outage.

The plan must be complete. Trying to plug the holes in your plan while in the middle of the outage does not work. The additional stress of knowing many are counting on you will not help your performance.

5. Develop the “Exist Without” process.

The outage will persist. What will you do while you cannot use that application? Will business come to a grinding halt?

It is essential to have a variety of plans in place based on the expected length of the outage. If the outage is short enough, perhaps you simply hunker down and wait it out. For longer outages, though, the business impact starts to be a problem. It is here that you need a well thought out Plan B.

Since this is an imagination exercise, you might as well think of many “exist without” scenarios. Some make more sense than others. Some are easier or harder to implement. Determine the best one to use and go with it.

6. Getting back to “Business as Usual.”

Once the application is back online, you must transition back to your normal business plan. Again, having a developed plan is important. Unwind what you’ve done and move on.

Application outages and disasters, both big and small, are part of our IT fabric. It is what we do about them that matters. If we simply spend an hour or two imagining what we would do, we will be ahead of the curve when the disaster happens. Better yet, imagine how much better prepared you could be if you put a formal plan in place.

Time to let your imagination go wild.


Monday, November 16, 2009

10 reasons why Windows 7's XP Mode is a big deal

* Date: August 3rd, 2009
* Author: Brien Posey
* Category: 10 things

Windows 7 features a new twist: XP Mode, which lets you run your
Windows XP apps without compatibility issues. Brien Posey explains why
XP Mode is significant and outlines its benefits.

One of the most exciting Windows 7 features is Windows XP Mode. It
uses a brand new version of Virtual PC to provide seamless access to
Windows XP applications, either through a virtual Windows XP desktop
or directly through the Windows 7 desktop. Here's a look at some of
the benefits XP Mode offers.

1: It solves compatibility problems

The biggest beef that most IT folks seem to have with Windows Vista is
its notorious hardware and software compatibility problems. Windows
7's Windows XP mode allows you to run Windows XP applications without
worrying about application compatibility.
2: It provides a much needed upgrade to Virtual PC

Virtual PC has been around for a long time, and although it has
improved from one version to the next, it still leaves a lot to be
desired. Among the improvements in the new version is the ability to
access the computer's physical hard drives (including the host
operating system's volumes) through a virtual machine.
3: It offers USB Support

Another much needed improvement to Virtual PC (which Windows XP Mode
depends on) is that it now offers USB support. It has previously been
impossible to access USB devices from within a virtual machine.
4: It's a way to modernize Windows XP

I know that there are those who would disagree with me, but Windows XP
hasn't aged well. First introduced in 2001, Windows XP is quickly
becoming outdated. Windows XP Mode enables you to run Windows XP inside
a modern operating system, which helps it take advantage of
some of the improvements that have been made to things like hardware
support and security. Windows XP itself hasn't changed, but because
Windows XP Mode is dependent on the host operating system, it can reap
some of these benefits.
5: It ensures long-term technical support

Microsoft's continued support for Windows XP has been questionable for
quite some time now. Every time Microsoft gets ready to pull the plug
on mainstream technical support, it gives in to pressure from
customers and extends the support period. It's great that Microsoft has
been so accommodating, but nobody knows how long that will last.
Having Windows XP Mode built into Windows 7 helps ensure that Windows
XP support will be available for many years to come.
6: Microsoft has made a commitment to XP

For the last several years, Microsoft has urged customers to adopt
Windows Vista, but most of Microsoft's corporate customers have chosen
to continue using Windows XP. By including Windows XP mode in Windows
7, Microsoft has finally acknowledged the importance of Windows XP to
its customers and given diehard XP fans a real solution that will
allow them to move forward without giving up the OS they've depended
on for almost a decade.
7: It offers seamless integration

One of my favorite things about Windows XP Mode is that it's
completely seamless. Sure, you can work within a full-blown Windows XP
virtual machine, but you don't have to. In fact, if you close the
Windows XP virtual machine, you can access your Windows XP
applications directly through the Windows 7 start menu and run those
applications seamlessly alongside applications that are installed
directly on Windows 7.
8: It's a first

This is the first time Microsoft has ever given us this type of
support for an older product. Exchange 2000 included a copy of
Exchange 5.5, but that was only included as part of the migration path
for Exchange 5.0 users. Microsoft wasn't expecting customers to
actually use both products. Making Windows XP part of the Windows 7
operating system is unprecedented.
9: It opens the door to lightweight operating systems

Windows has always had a bad reputation for being excessively bloated.
One of the reasons for the bloat is that most versions of Windows have
included a significant amount of code to provide backward
compatibility with the previous version. By relying on virtualization
to provide this compatibility, Microsoft may be able to greatly reduce
the size of the core operating system in Windows 8.
10: Future plug-ins are possible

The way Microsoft has connected Windows XP to Windows 7 through
virtualization opens the door to future operating system plug-ins.
Don't be surprised if Windows 8 gives you the ability to pick and
choose the legacy operating systems you want to support. Microsoft
could end up offering virtualization plug-ins that will allow it to
support Windows XP, Vista, and Windows 7. Using this method would
allow customers to pick the type of backward compatibility they need
without having to install any unnecessary legacy code.

Saturday, November 14, 2009

finally, 3Com acquired by HP -- who's next?

I received the email below:


To all 3Com Partners,

As a valued member of our 3Com Focus Partner Program, I wanted to share
with you some exciting news that I believe will help generate even more
momentum for our joint efforts in selling to enterprise accounts. Together,
we've set our sights on disrupting the networking market with our
"China Out" strategy by leveraging our market leadership in China to offer
customers a best-in-class price/performance advantage with a lower TCO,
a broad, modern product portfolio and a new level of customer
relationships.

Yesterday on November 11, 2009, we announced our plans to accelerate our
strategy by signing a definitive agreement to be acquired by HP. This is an
exciting opportunity to form a powerhouse that will disrupt the industry by
offering customers an unprecedented option for data center and network
infrastructure solutions. Never before has there been a networking company
with such a broad and modern, open standards-based product portfolio with
the channel reach and investment capabilities that together, HP and 3Com
will have.

The beauty of this transaction is our respective portfolios are extremely
complementary in terms of products, geographies and channels. We also
share a similar focus on simplifying the network and driving significant
TCO reductions. The combination leverages each company's
strengths: our China market position and broad integrated product
portfolio, including the expansive H3C enterprise networking and
TippingPoint security portfolios that has been consistently gaining market
share across the globe; and HP's PC SME portfolio and data center
solutions. What this means for you is you will have access to an even
broader set of network infrastructure solutions and benefit from the
company's global presence and world-class services and support
organization.

Be assured, we remain dedicated to teaming with partners such as you
-- the best partners in our industry -- who understand the needs of
enterprises and to enabling them to meet the high standard of service and
support 3Com and HP are committed to delivering. We will finalize plans
around the combined company's go-to-market strategy and channel programs
throughout the integration process.

Today, the process has begun to secure the approvals required to finalize
the acquisition. Until the merger receives all regulatory approvals and the
acquisition closes, HP and 3Com will continue to operate as two companies.
Prior to the deal closing, partners should continue to sell those products
they currently offer. As such, you should continue to work with your
existing sales team. You can expect further communication from us when the
transaction is finalized.

With today's news, I believe we are creating the most powerful, disruptive
force in the networking industry. Both HP and 3Com are fully committed to
making this acquisition and subsequent integration seamless for you.
Importantly, we will continue to invest in our business in order to
continue to deliver the innovative networking and data center solutions
you've come to expect from us.

Regards

Rose Chen
VP & GM of Asia Pacific
3Com Corporation


Friday, November 13, 2009

HP's buyout of 3Com continues IT convergence push

By Mark Fontecchio, News Writer
12 Nov 2009 | SearchDataCenter.com


IT pros say Hewlett-Packard Co.'s surprise decision to buy networking company 3Com continues vendor consolidation in the IT industry, which may be a good--and a bad--thing.


It could provide economies of scale and greater integration of IT gear, but it also consolidates more IT firepower in fewer vendor hands, and that may not be advantageous for IT customers.

"I suppose there are a couple ways to look at mergers like these," said Clive Greenall, IT facilities manager at the Standard Bank of South Africa. "The companies are consolidating skills under one roof, which may be a good thing if you're looking for a one-stop solution, and presumably they'll keep the best skills from the consolidation.

"The other side of the coin could be price fixing as a result of less competition, certain arrogance toward servicing client bases -- take it or leave it -- and job losses as the duplication of skills and responsibilities is addressed," he added.
Pushing toward converged data center hardware
Illuminata Inc. analyst Gordon Haff said end users have been clamoring to get away from "the erector set approach" to IT: that is, buying servers, networks and storage separately, configuring them the best they can, and hoping they all play nice with one another. On the flipside, there is concern about vendor lock-in.

HP's $2.7 billion bid for 3Com was driven in part by Cisco's aggressive push into the data center, where the two companies compete more and more directly with their respective sets of converged hardware that combine servers, networking and storage in one box.

"Every vendor has their strong points, and just because a systems vendor makes its own brand-name storage or networking gear doesn't mean you'll be getting the best quality," said Charles King, analyst at Pund-IT Inc. "Businesses need to be careful with this idea of working with an integrated systems vendor."

IT vendor convergence has been the name of the game over the past year. HP bought IT services giant EDS last year and now plans to add 3Com. Oracle is still working on its $7.4 billion acquisition of Sun Microsystems.

Cisco recently rolled out its Unified Computing System (UCS), with HP responding with BladeSystem Matrix. And just last week, Cisco, EMC and VMware announced a partnership to offer their take on converged infrastructure under an architecture called vBlock.
Integration upside offset by fear of vendor lock-in
"The pendulum is swinging back toward a bigger and more vertically integrated set of vendors," Haff said. "What the individual combinations look like varies a bit, of course."

Earlier this year, when Cisco rolled out its UCS, some IT pros expressed worry about overreliance on a single vendor.

"To be completely honest, when I first heard about that system, all I could think of is vendor lock-in," said Kyle Rankin, a systems architect at QuinStreet, a Foster City, Calif.-based marketing company.

Rankin added that "it's going to be a tough sell for a lot of people who have large-scale server footprints already."

King said it's not unusual to go into a data center and see racks of different vendors' equipment sitting right next to one another. Oftentimes IT pros will just buy what they need, when they need it, and what's on sale.

"This idea of a single overarching vendor that clients will dedicate themselves [to] can be an anomaly," he said.

Still, Haff said that concerns about vendor lock-in today are nothing compared with 20 years ago.

"The fact that HP can offer you converged infrastructure doesn't keep you from buying a ProLiant server, using Cisco networking gear and EMC storage, and running Microsoft Windows on the ProLiant," he said.

"If you go back 20 years, uh uh. If you were going to buy a computer system from Digital Equipment, you most likely had to buy a bunch of other things from Digital Equipment. Even if you accept the notion that we're moving back toward a more vertical company structure today, the fact is you still have the capability to mix and match if that is your choice."


----------------------------------------------------------------------------------------------
For those of you running 3Com: get ready to move to HP ProCurve!

Thursday, November 12, 2009

Have you tried Citrix XenServer?

XenServer Highlights

Transform your datacenter into a more dynamic server workload delivery center – free – with Citrix XenServer.

XenServer is based on Xen® – the open source hypervisor that’s supported by Intel, AMD, HP and more than forty other organizations.
XenServer is easy to deploy, and its wizard-based controls and advanced capabilities mean more servers per administrator and zero downtime for upgrades.
XenMotion enables the live migration of any type of workload to any server with zero downtime and maximum resource utilization.
XenServer is ideal for I/O intensive workloads like Citrix XenApp™, Microsoft SQL Server and Microsoft Exchange.

Citrix Essentials Highlights

With XenServer, you get unmatched enterprise-class features – for free. And when you’re ready for advanced virtualization management, just add Citrix Essentials for XenServer. By doing so, you’ll benefit from:

Automated lab management, which streamlines building, testing, sharing and delivering throughout the application lifecycle, from development labs into the production environment.
Advanced storage integration, featuring Citrix StorageLink™ technology, which exposes the advanced data and storage management features of today’s storage systems directly to a virtualized environment.
Dynamic provisioning services for on-demand deployment of workloads to any combination of virtual machines or physical servers from a single golden image.
Workflow orchestration for simplified scripting and automation of key management processes.
High availability for automatic restart and intelligent placement of virtual machines in case of failure of guest systems or physical servers.

Get started with XenServer – it's free!!

Defining an Enterprise Security Strategy (ctoEdge)

Security | How-To | Shaun Hummel, Saturday, October 17, 2009
Tags: anti-virus solutions, Authentication Systems, Cisco Systems, Cybercrime, IBM, Intrusion-Prevention Systems, network security, security policy, usage management and monitoring, VPN, Vulnerability Assessment

There are five primary security groups that should be considered with any enterprise security model. These include security policy, perimeter, network, transaction and monitoring security. These are all part of any effective company security strategy.

Any enterprise network has a perimeter that represents all equipment and circuits that connect to external networks, both public and private. The internal network is comprised of all the servers, applications, data, and devices used for company operations. The demilitarized zone (DMZ) represents a location between the internal network and the perimeter comprised of firewalls and public servers. It allows some access for external users to those network servers and denies traffic that would get to internal servers. That doesn't mean that all external users will be denied access to internal networks. On the contrary, a proper security strategy specifies who can access what and from where.

For instance, telecommuters will use VPN concentrators at the perimeter to access Windows and UNIX servers. Business partners could use an Extranet VPN connection for access to the company S/390 Mainframe. Define what security is required at all servers to protect company applications and files.
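
As an illustration of "who can access what and from where," the policy can be prototyped as a default-deny rule table before it is pushed into firewall and VPN configurations. Everything in this Python sketch (zone names, host names, rules) is hypothetical.

    # Toy access-policy table: who can access what, and from where.
    # Zones, hosts and rules are hypothetical; real enforcement happens at
    # the firewall/VPN layer, not in application code.
    RULES = [
        # (source zone, destination host, allowed?)
        ("internet",         "dmz-web-server", True),   # public may reach the DMZ
        ("internet",         "internal-db",    False),  # ...but never internal servers
        ("vpn-telecommuter", "windows-server", True),
        ("vpn-extranet",     "s390-mainframe", True),   # partners reach the mainframe
    ]

    def is_allowed(source_zone: str, dest_host: str) -> bool:
        """Default-deny: only explicitly permitted (zone, host) pairs pass."""
        return any(src == source_zone and dst == dest_host and ok
                   for src, dst, ok in RULES)

    assert is_allowed("vpn-extranet", "s390-mainframe")
    assert not is_allowed("internet", "internal-db")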

Identify transaction protocols required to secure data as it travels across secure and non-secure network segments. Monitoring activities should then be defined that examine packets in real time as a defensive and proactive strategy for protecting against internal and external attacks. A recent survey revealed that internal attacks from disgruntled employees and consultants are more prevalent than hacker attacks. Virus detection should then be addressed, since allowed sessions could be carrying a virus at the application layer with an e-mail or a file transfer.

Security Policy Document
The security policy document describes various policies for all employees that use the enterprise network. It specifies what an employee is permitted to do and with what resources. The policy includes non-employees, such as consultants, business partners, clients and terminated employees. In addition, security policies are defined for Internet e-mail and virus detection. It defines what cyclical process, if any, is used for examining and improving security.

Perimeter Security
This describes a first line of defense that external users must deal with before authenticating to the network. It is security for traffic whose source and destination is an external network. Many components are used to secure the perimeter of a network. The assessment reviews all perimeter devices currently utilized. Typical perimeter devices are firewalls, external routers, TACACS servers, RADIUS servers, dial servers, VPN concentrators and modems.

Network Security
This is defined as all the server and legacy host security that is implemented for authenticating and authorizing internal and external employees. When a user has been authenticated through perimeter security, it is the security that must be dealt with before starting any applications. The network exists to carry traffic between workstations and network applications. Network applications are implemented on a shared server that could be running an operating system such as Windows, UNIX or Mainframe MVS. It is the responsibility of the operating system to store data, respond to requests for data, and maintain security for that data.

Once users are authenticated to a Windows ADS domain with a specific user account, they have privileges that have been granted to that account. Such privileges would be to access specific directories at one or many servers, start applications, and administer some or all of the Windows servers. When the user authenticates to the Windows Active Directory Services, the authentication is not tied to any specific server. There are tremendous management and availability advantages to that, since all accounts are managed from a centralized perspective and copies of the security database are maintained at various servers across the network. UNIX and Mainframe hosts will usually require logon to a specific system; however, network rights can be distributed across many hosts.

• Network operating system domain authentication and authorization
• Windows Active Directory Services authentication and authorization
• UNIX and Mainframe host authentication and authorization
• Application authorization per server
• File and data authorization

Transaction Security
Transaction security works from a dynamic perspective. It attempts to secure each session with five primary activities. They are non-repudiation, integrity, authentication, confidentiality and virus detection. Transaction security ensures that session data is secure before being transported across the enterprise or Internet. This is important when dealing with the Internet, since data is vulnerable to those that would use the valuable information without permission. E-Commerce employs some industry standards such as SET and SSL, which describe a set of protocols that provide non-repudiation, integrity, authentication and confidentiality. Virus detection provides transaction security by examining data files for signs of virus infection before they are transported to an internal user or before they are sent across the Internet. The following describes industry standard transaction security protocols.
• Non-Repudiation - RSA Digital Signatures
• Integrity - MD5 Route Authentication
• Authentication - Digital Certificates
• Confidentiality - IPSec/IKE/3DES
• Virus Detection - McAfee/Norton Antivirus Software
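
To make the integrity item concrete, here is a minimal sketch using Python's standard hashlib. MD5 is shown only because the article names it; a modern deployment would prefer an HMAC over SHA-256. The message is a made-up example.

    # Minimal integrity check: the sender publishes a digest, the receiver
    # recomputes it and compares. (MD5 is used only because the text names
    # it; prefer HMAC-SHA256 today.)
    import hashlib

    message = b"wire transfer: account 1234, amount 500.00"

    sent_digest = hashlib.md5(message).hexdigest()      # computed by the sender

    # ... message and digest travel across the network ...

    received_digest = hashlib.md5(message).hexdigest()  # recomputed by the receiver
    assert received_digest == sent_digest, "message was altered in transit"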

Monitoring Security
Monitoring network traffic for security attacks, vulnerabilities and unusual events is essential for any security strategy. This assessment identifies what strategies and applications are being employed. The following list describes some typical monitoring solutions.
• Intrusion detection sensors are available for monitoring real-time traffic as it arrives at your perimeter. IBM Internet Security Scanner is an excellent vulnerability assessment testing tool that should be considered for your organization.
• Syslog server messaging is a standard UNIX program found at many companies that writes security events to a log file for examination. It is important to have audit trails to record network changes and assist with isolating security issues.
• Big companies that utilize a lot of analog dial lines for modems sometimes employ dial scanners to determine open lines that could be exploited by security hackers.
• Facilities security is typical badge access to equipment and servers that host mission-critical data. Badge access systems record the date/time that each specific employee entered the telecom room and left.
• Cameras sometimes record what specific activities were conducted as well.
Intrusion Prevention Sensors (IPS): Cisco markets intrusion prevention sensors (IPS) to enterprise clients for improving the security posture of the company network. The Cisco IPS 4200 series utilizes sensors at strategic locations on the inside and outside network, protecting switches, routers and servers from hackers. IPS sensors examine network traffic in real time or inline, comparing packets with pre-defined signatures. If a sensor detects suspicious behavior, it will send an alarm, drop the packet, and take evasive action to counter the attack. A sensor can be deployed as an inline IPS, as an IDS where traffic doesn't flow through the device, or as a hybrid. Most sensors inside the data center network will be designated IPS mode, with dynamic security features thwarting attacks as soon as they occur. Note that IOS intrusion prevention software is available today as an option on routers.

Vulnerability Assessment Testing (VAST): IBM Internet Security Scanner (ISS) is a vulnerability assessment scanner focused on enterprise customers for assessing network vulnerabilities from an external and internal perspective. The software runs on agents and scans various network devices and servers for known security holes and potential vulnerabilities. The process is comprised of network discovery, data collection, analysis and reports. Data is collected from routers, switches, servers, firewalls, workstations, operating systems and network services. Potential vulnerabilities are verified through non-destructive testing and recommendations are made for correcting any security problems. A reporting facility available with the scanner presents the findings to company staff.

Syslog Server Messaging: Cisco IOS has a UNIX program called Syslog that reports on a variety of device activities and error conditions. Most routers and switches generate Syslog messages, which are sent to a designated UNIX workstation for review. If your Network Management Station (NMS) is using the Windows platform, there are utilities that allow viewing of log files and sending Syslog files between a UNIX and Windows NMS.
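
Getting an application's security events to that central collector is a one-liner from most languages. A minimal Python sketch, assuming a syslog server listening on UDP port 514 at a hypothetical address:

    # Forward a security event to a central syslog collector.
    # The collector address is hypothetical; substitute your own.
    import logging
    import logging.handlers

    logger = logging.getLogger("security")
    logger.setLevel(logging.INFO)
    logger.addHandler(
        logging.handlers.SysLogHandler(address=("192.0.2.50", 514)))

    logger.warning("3 failed logon attempts for user jdoe from 10.1.2.3")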

Defining an Enterprise Security Strategy (ctoEdge)

Defining an Enterprise Security Strategy

Security | How-To | Shaun Hummel, Saturday, October 17, 2009

Tags: anti-virus solutions, Authentication Systems, Cisco Systems, Cybercrime, IBM, Intrusion-Prevention Systems, network security, security policy, usage management and monitoring, VPN, Vulnerability Assessment

There are five primary security groups that should be considered with any enterprise security model. These include security policy, perimeter, network, transaction and monitoring security. These are all part of any effective company security strategy.

Any enterprise network has a perimeter that represents all equipment and circuits that connect to external networks, both public and private. The internal network is comprised of all the servers, applications, data, and devices used for company operations. The demilitarized zone (DMZ) represents a location between the internal network and the perimeter comprised of firewalls and public servers. It allows some access for external users to those network servers and denies traffic that would get to internal servers. That doesn't mean that all external users will be denied access to internal networks. On the contrary, a proper security strategy specifies who can access what and from where.

For instance, telecommuters will use VPN concentrators at the perimeter to access Windows and UNIX servers. Business partners could use an Extranet VPN connection for access to the company S/390 Mainframe. Define what security is required at all servers to protect company applications and files.

Identify transaction protocols required to secure data as it travels across secure and non-secure network segments. Monitoring activities should then be defined that examine packets in real time as a defensive and proactive strategy for protecting against internal and external attacks. A recent survey revealed that internal attacks from disgruntled employees and consultants are more prevalent than hacker attacks. Virus detection should then be addressed, since allowed sessions could be carrying a virus at the application layer with an e-mail or a file transfer.

Security Policy Document

The security policy document describes the policies that apply to every employee who uses the enterprise network, specifying what an employee is permitted to do and with what resources. The policy also covers non-employees, such as consultants, business partners, clients and terminated employees. In addition, it defines security policies for Internet e-mail and virus detection, and it specifies what cyclical process, if any, is used for examining and improving security.

Perimeter Security

This describes the first line of defense that external users must deal with before authenticating to the network: security for traffic whose source or destination is an external network. Many components are used to secure the perimeter of a network, and the assessment should review all perimeter devices currently utilized. Typical perimeter devices are firewalls, external routers, TACACS servers, RADIUS servers, dial servers, VPN concentrators and modems.

Network Security

This is defined as all of the server and legacy host security implemented for authenticating and authorizing internal and external employees. Once a user has been authenticated through perimeter security, this is the security that must be dealt with before starting any applications. The network exists to carry traffic between workstations and network applications, which are implemented on shared servers running an operating system such as Windows, UNIX or Mainframe MVS. It is the responsibility of the operating system to store data, respond to requests for data, and maintain security for that data.

Once users are authenticated to a Windows Active Directory domain with a specific user account, they have whatever privileges have been granted to that account, such as access to specific directories at one or many servers, the right to start applications, or administration of some or all of the Windows servers. Authentication against Active Directory is not tied to any one specific server, which brings tremendous management and availability advantages: all accounts are managed from a centralized perspective, and copies of the security database are maintained at various servers across the network. UNIX and Mainframe hosts usually require logon to a specific system, although network rights can be distributed across many hosts. The areas to assess include the following (a minimal authentication sketch appears after the list):

  • Network operating system domain authentication and authorization
  • Windows Active Directory Services authentication and authorization
  • UNIX and Mainframe host authentication and authorization
  • Application authorization per server
  • File and data authorization
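As a hedged illustration of the first two items, the following Python sketch authenticates a user against an Active Directory domain over LDAP and reads the group memberships that drive authorization. It assumes the third-party ldap3 package; the domain controller, domain name and account are hypothetical.

    # Minimal sketch: directory-based authentication and authorization lookup.
    # Assumes the "ldap3" package; host, domain and credentials are hypothetical.
    from ldap3 import Server, Connection, ALL, NTLM

    server = Server("dc01.example.com", get_info=ALL)  # hypothetical domain controller
    conn = Connection(server, user="EXAMPLE\\jsmith",
                      password="not-a-real-password", authentication=NTLM)

    if conn.bind():  # the directory, not any single server, validates the account
        # Authorization data: the groups this account belongs to.
        conn.search("dc=example,dc=com", "(sAMAccountName=jsmith)",
                    attributes=["memberOf"])
        print(conn.entries)
    conn.unbind()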

Transaction Security

Transaction security works from a dynamic perspective, attempting to secure each session with five primary activities: non-repudiation, integrity, authentication, confidentiality and virus detection. Transaction security ensures that session data is secure before being transported across the enterprise or the Internet. This is especially important on the Internet, where data is vulnerable to anyone who would use that valuable information without permission. E-commerce employs industry standards such as SET and SSL, which describe sets of protocols that provide non-repudiation, integrity, authentication and confidentiality. Virus detection provides transaction security by examining data files for signs of virus infection before they are transported to an internal user or sent across the Internet. The following list pairs each activity with an industry-standard mechanism; a short sketch of two of them appears after the list.

  • Non-Repudiation - RSA Digital Signatures
  • Integrity - MD5 Route Authentication
  • Authentication - Digital Certificates
  • Confidentiality - IPSec/IKE/3DES
  • Virus Detection - McAfee/Norton Antivirus Software
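To illustrate two of these activities, here is a minimal Python sketch that computes an MD5 digest for integrity checking and an RSA digital signature for non-repudiation. It assumes the third-party cryptography package alongside the standard library's hashlib; the message content is invented for the example.

    # Minimal sketch: integrity via a digest, non-repudiation via an RSA signature.
    # Assumes the "cryptography" package; the message content is illustrative.
    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    message = b"transfer 100 units to account 42"

    # Integrity: any change to the message changes its digest.
    print(hashlib.md5(message).hexdigest())

    # Non-repudiation: only the private-key holder could have produced this signature.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # The receiver verifies with the public key; verify() raises if data was altered.
    private_key.public_key().verify(signature, message,
                                    padding.PKCS1v15(), hashes.SHA256())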

Monitoring Security

Monitoring network traffic for security attacks, vulnerabilities and unusual events is essential for any security strategy. This assessment identifies what strategies and applications are being employed. The following list describes some typical monitoring solutions.

  • Intrusion detection sensors are available for monitoring real-time traffic as it arrives at your perimeter.
  • IBM Internet Security Scanner is an excellent vulnerability assessment testing tool that should be considered for your organization.
  • Syslog server messaging is a standard UNIX program found at many companies that writes security events to a log file for examination. It is important to have audit trails to record network changes and assist with isolating security issues.
  • Big companies that utilize a lot of analog dial lines for modems sometimes employ dial scanners to determine open lines that could be exploited by security hackers.
  • Facilities security is typically badge access to equipment and servers that host mission-critical data. Badge access systems record the date and time that each specific employee entered and left the telecom room, and cameras sometimes record what specific activities were conducted as well.

Intrusion Prevention Sensors (IPS): Cisco markets intrusion prevention sensors (IPS) to enterprise clients for improving the security posture of the company network. The Cisco IPS 4200 series places sensors at strategic locations on the inside and outside of the network, protecting switches, routers and servers from hackers. IPS sensors examine network traffic in real time, comparing packets with pre-defined signatures. If a sensor detects suspicious behavior, it will send an alarm, drop the packet, and take evasive action to counter the attack. The sensor can be deployed inline as an IPS, in IDS mode where traffic doesn't flow through the device, or as a hybrid of the two. Most sensors inside the data center network will be designated IPS mode, with dynamic security features thwarting attacks as soon as they occur. Note that IOS intrusion prevention software is available today as an option on routers. A toy sketch of the signature-matching idea follows.
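The signature-matching concept can be shown in a few lines of Python. This is a toy sketch, not how the Cisco IPS 4200 is implemented; the signature patterns are invented for the example.

    # Toy signature matching: compare packet payloads against pre-defined patterns
    # and decide whether to forward or to drop and alarm. Patterns are illustrative.
    SIGNATURES = {
        b"' OR '1'='1": "SQL injection attempt",
        b"\x90\x90\x90\x90": "possible NOP sled",
    }

    def inspect(payload: bytes) -> str:
        for pattern, name in SIGNATURES.items():
            if pattern in payload:
                return f"DROP + ALARM: matched signature '{name}'"
        return "FORWARD"

    print(inspect(b"GET /login?user=' OR '1'='1 HTTP/1.1"))  # DROP + ALARM
    print(inspect(b"GET /index.html HTTP/1.1"))              # FORWARD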

Vulnerability Assessment Testing (VAST): IBM Internet Security Scanner (ISS) is a vulnerability assessment scanner aimed at enterprise customers for assessing network vulnerabilities from both an external and an internal perspective. The software runs on agents and scans various network devices and servers for known security holes and potential vulnerabilities. The process comprises network discovery, data collection, analysis and reporting. Data is collected from routers, switches, servers, firewalls, workstations, operating systems and network services. Potential vulnerabilities are verified through non-destructive testing, and recommendations are made for correcting any security problems. A reporting facility presents the findings to company staff. The sketch below illustrates the discovery step.
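The network discovery and data collection steps can be approximated in a few lines. The following Python sketch probes a host for well-known open TCP ports; real scanners such as ISS go far beyond this, and the address used here is a documentation-only (TEST-NET) example.

    # Minimal sketch of the discovery step of a vulnerability scan: check a
    # host for well-known open TCP ports. Host and port list are illustrative.
    import socket

    def open_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                    found.append(port)
        return found

    print(open_ports("192.0.2.10", [22, 23, 80, 443]))  # e.g. [22, 80]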

Syslog Server Messaging: Syslog is a standard UNIX logging facility that reports on a variety of device activities and error conditions. Most routers and switches, including Cisco IOS devices, generate Syslog messages, which are sent to a designated UNIX workstation for review. If your network management station (NMS) runs on the Windows platform, there are utilities that allow viewing of log files and transferring Syslog files between a UNIX and a Windows NMS. A short forwarding sketch follows.
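As a final hedged example, the following Python sketch forwards an event to a central Syslog server using the standard library's logging module; the server address and the message text are invented for illustration.

    # Minimal sketch: send an event to a designated Syslog server over UDP/514.
    # The server address and message are illustrative.
    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("netmon")
    logger.setLevel(logging.INFO)
    logger.addHandler(SysLogHandler(address=("192.0.2.50", 514)))

    logger.warning("interface GigabitEthernet0/1 changed state to down")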