Data Center is our focus

We help you build, access, and manage your data center and server rooms.

Structured Cabling

We help with your structured cabling: fiber optic, UTP, STP, and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software, and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Saturday, August 10, 2013

Building Your Own Cloud Storage Service with OwnCloud

There are many articles on the Internet about using OwnCloud to build your own cloud storage service. Here is one of them.

Source: http://www.cloudindonesia.or.id/membuat-layanan-cloud-storage-sendiri-dengan-owncloud.html




Cloud storage services are now scattered all over the Internet, from free to paid. A popular example is Dropbox, which provides 2 GB of space free of charge and can be upgraded to a maximum of 18 GB; there are also Google Drive, SugarSync, SpiderOak, and Microsoft SkyDrive. Each has its own strengths and weaknesses. For a more detailed comparison of cloud storage services, see the article “Perbandingan Beberapa Cloud Storage” (a comparison of several cloud storage services).
In this tutorial we will build our own cloud storage service that can be used by an individual, a community, or an organization or company. We will use a CMS (Content Management System) built specifically for Dropbox- or Google Drive-style cloud storage: ownCloud, which can be downloaded free of charge and is open-source software. A complete list of ownCloud features can be found here.
OwnCloud falls into the Infrastructure as a Service (IaaS) category of cloud services. With ownCloud we can store files, folders, contacts, audio, photo galleries, calendars, and other documents. We can also access files stored on the ownCloud server and synchronize them with mobile devices, desktops, or a web browser.
Note: installing the web server stack (Apache, MySQL, PostgreSQL, PHP) is not covered here; I assume the machine we will use for the ownCloud installation already has the dependency packages that ownCloud requires.
With that, here is the step-by-step procedure for installing ownCloud.
The first step is to make sure the following software is already installed in our server environment:
  • Apache HTTP Server version 2 or later
  • PHP version 5.1 or later: php5 php5-json php-xml php-mbstring php5-zip php5-gd php5-sqlite curl libcurl3 libcurl3-dev php5-curl php-pdo
  • For the database: SQLite, MySQL 5.1 or later, or PostgreSQL 8 or later
The operating system can be GNU/Linux, Microsoft Windows, Solaris, Mac OS X, or the BSD family (FreeBSD, NetBSD, OpenBSD, etc.), as long as an HTTP web server, PHP, and a database engine (SQLite, MySQL, or PostgreSQL) are available. ownCloud also supports user authentication against LDAP.
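If your server runs Debian or Ubuntu, a rough sketch of installing these prerequisites is shown below. It is only a sketch: exact package names vary between releases (the list simply mirrors the PHP modules above), and the MySQL packages are only needed if you choose MySQL rather than SQLite.
sudo apt-get update
# Web server plus the PHP modules listed above (names may differ on your release)
sudo apt-get install apache2 php5 php5-json php-xml php-mbstring php5-zip \
    php5-gd php5-sqlite curl libcurl3 libcurl3-dev php5-curl php-pdo
# Optional: only if you will use MySQL instead of SQLite
sudo apt-get install mysql-server php5-mysql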
Once you are sure the machine meets the requirements above, the next step is to download the ownCloud package.
Linux users can do this with the following commands:
wget -qO- "http://owncloud.org/owncloud-download-4-0-0" | tar xjvf -
cp -r owncloud/* /path/to/webserver
Note: adjust the destination path (/path/to/webserver) to match your own public_html directory.
For example:
  • CentOS / Fedora  :  /var/www/html
  • Debian / Ubuntu   :  /var/www
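As a concrete example on Debian/Ubuntu, the copy step could look like the sketch below. The /var/www path and the www-data user are Debian-specific assumptions, and the chown is an extra step not shown above, but the web server must be able to write ownCloud's data and config directories.
sudo cp -r owncloud /var/www/
sudo chown -R www-data:www-data /var/www/owncloud   # let Apache write data/ and config/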
Once the files are copied, browse to the host address; in this example I am using localhost. A page for creating an administrator account will appear, like this:
Creating the ownCloud Administrator Account
Click the “Advanced” menu to change the directory where data will be stored and to choose which database to use: SQLite, MySQL, or PostgreSQL. My suggestion: if the data and the number of users are small, SQLite is enough; for larger data sets, use MySQL or PostgreSQL. If we use MySQL or PostgreSQL as the database, we have to create the database first.
To create the database and its user in MySQL, run the following queries:
CREATE DATABASE owncloud;
GRANT ALL ON owncloud.* TO 'dbuser'@'localhost' IDENTIFIED BY 'dbpass';
FLUSH PRIVILEGES;
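If you prefer to run those statements straight from the shell instead of an interactive MySQL session, a one-liner like the following should work on MySQL 5.x (it assumes the MySQL root account and will prompt for its password):
mysql -u root -p -e "CREATE DATABASE owncloud; GRANT ALL ON owncloud.* TO 'dbuser'@'localhost' IDENTIFIED BY 'dbpass'; FLUSH PRIVILEGES;"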
For PostgreSQL, the equivalent statements are as follows:
CREATE USER dbuser WITH PASSWORD 'dbpass';
CREATE DATABASE owncloud OWNER dbuser ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE owncloud TO dbuser;
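The PostgreSQL equivalent from the shell, run as the postgres superuser, might look like this; if your cluster's template database is not UTF-8 you may also need to add TEMPLATE template0 to the CREATE DATABASE statement:
sudo -u postgres psql -c "CREATE USER dbuser WITH PASSWORD 'dbpass';"
sudo -u postgres psql -c "CREATE DATABASE owncloud OWNER dbuser ENCODING 'UTF8';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE owncloud TO dbuser;"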
Note: replace dbuser and dbpass above with your own database user name and password.
Then fill in the database connection form in the ownCloud installer with the name of the database, the database user, and the password we have just created.
ownCloud Database Setup
Once everything is filled in correctly, click “Finish”. ownCloud will then create its table structure in the database and insert the administrator account we created earlier.
The ownCloud installation is now essentially done, but there is one problem left to deal with. The first time we run the ownCloud instance we have just installed, an error like this appears:
Cannot modify header information – headers already sent by (output started at …….
Don't panic: it is a known minor bug in the ownCloud version we are using. The fix is simple. Open the following file in a text editor:
/path/to/installation/owncloud/apps/files_odfviewer/appinfo/app.php
Then delete the whitespace (blank lines) that follows the closing PHP tag at the end of the file.
<?php
OCP\Util::addStyle( 'files_odfviewer', 'webodf' );
OCP\Util::addStyle( 'files_odfviewer', 'odfviewer' );
OCP\Util::addScript( 'files_odfviewer', 'viewer' );
OCP\Util::addScript( 'files_odfviewer', 'webodf' );
?>
(one or more blank lines follow the closing ?> tag here; that trailing whitespace is what triggers the error)
Change it so that the file ends exactly like this, with nothing after the closing tag:
<?php
OCP\Util::addStyle( 'files_odfviewer', 'webodf' );
OCP\Util::addStyle( 'files_odfviewer', 'odfviewer' );
OCP\Util::addScript( 'files_odfviewer', 'viewer' );
OCP\Util::addScript( 'files_odfviewer', 'webodf' );
?>
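To double-check that nothing is left after the closing tag, you can dump the last few bytes of the file; the output should show the closing ?> followed by at most a single newline (\n):
tail -c 16 /path/to/installation/owncloud/apps/files_odfviewer/appinfo/app.php | od -c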
A full write-up on fixing that error can be read here. And with that, the ownCloud installation is complete.
So we have reached the end of this short tutorial; I hope it is useful for anyone who needs it. That is all for this short guide to building your own cloud storage. To try it out, simply explore it yourself: the menus in ownCloud are, I think, easy enough to understand. Explore further, and if you have questions about anything covered in this tutorial, please ask in the comments section below. Thank you, see you next time.
References:
  • http://owncloud.org/install/
  • http://www.tukangubuntu.com/owncloud-3.html
  • http://www.howtoforge.com/your-cloud-your-data-your-way-owncloud-4.0-nginx-postgresql-on-centos-6.2

It's Time to Have Your Own Cloud

It is time to have or to use our own cloud facilities, or at least domestic ones, rather than depending on the USA.


NSA Spying Harms U.S. Cloud Business

Electronic eavesdropping by the NSA (and, likely, other agencies of the federal government) has broad implications for law and civil liberties, but it also has economic effects—particularly on the IT industry. Specifically, U.S.-based cloud service providers are suffering in their business with foreign clients as a result of the recent revelations of broad government spying. Such detrimental effects, however, may be just part of a larger drag on the cloud/Internet as a social and business medium.

U.S. Cloud Revenue to Suffer

Summarizing the situation, a report by the Information Technology & Innovation Foundation (ITIF) entitled How Much Will PRISM Cost the U.S. Cloud Industry states, “The U.S. cloud computing industry stands to lose $22 to $35 billion over the next three years as a result of the recent revelations about the NSA’s electronic surveillance programs.” In particular, the damage will center on business between U.S. providers and foreign clients, who may not want their data subject to the kinds of secret programs that the NSA’s Prism represents.
The report’s estimates regarding potential losses of U.S. business to offshore competitors range from 10% of the foreign market (about $22 billion in revenue from 2014 to 2016) to 20% (about $35 billion). The U.S. share of the foreign market for cloud services is forecast at roughly 80% in 2014, down from a hypothetical forecast of 85% absent revelations of the Prism spying program.
These numbers represent a tremendous loss in potential business, as Gartner predicts global spending on cloud computing will double by 2016, compared with just 3% growth for overall worldwide IT spending. And as Europe and Asia aim to better compete in the cloud, unbridled spying by the U.S. government becomes marketing leverage, particularly for clients that want to protect sensitive information.

Spying Programs Hampering an Open Cloud/Internet

Even if you believe President Obama’s recent statement that “We don’t have a domestic spying program,” the surveillance at a minimum extends to (likely all) communications originating from or destined for foreign locations. Clearly, as the ITIF report emphasizes, the result in the IT sector is a net detriment, with potentially billions in lost revenue (and, ironically, lost taxes for the financially underwater government). But it also reinforces a barrier between the U.S. and the rest of the world that is reminiscent of the Iron Curtain—an intangible wall (or quite tangible, in the case of the Berlin Wall) between the Soviet bloc and the rest of the world.
To be sure, some of the circumstances of the U.S. relative to the erstwhile Soviet Union are different, but the resounding irony of Russia granting temporary asylum to U.S. whistleblower Edward Snowden is unmistakable. Furthermore, attempts to limit travel through revocation of U.S. passports (as has been done to Snowden) create more of an impression of a wall that aims as much to keep people in as to keep them out. Calls to revoke even U.S. citizenship in some cases are also increasing in frequency—a dangerous tool in light of the nation’s policy toward so-called enemy combatants (an ambiguous phrase that could encompass any number of persons posing real or perceived threats).
The result in terms of the cloud (and/or Internet, depending on how these terms are defined) is the slow introduction of a digital national border that discourages interaction with foreign companies and individuals. This border may or may not ever become official, but the unblinking eye of agencies like the NSA, combined with highly aggressive foreign policy, makes the U.S. a less palatable place to do business in, or to use as a waypoint for sensitive data. In a similar vein, many offshore financial institutions want little or nothing to do with American citizens owing to regulatory and investigative problems that the U.S. government has posed over the last decade or so.
And the list of hassles goes on, including the Patriot Act—a story that IDC flubbed up in late 2012. Reassurances that “end users’ concerns over foreign governments’ access to cloud data, particularly data stored in the U.S., are misplaced” were blown away this year by Snowden’s disclosures. IDC went on to say, “Many critics of cloud services say that the Patriot Act gives U.S. government agencies unprecedented access to information stored in the cloud. This concern is often heightened because the vast majority of leading cloud vendors are U.S.-based.” But the surveillance dragnet in the U.S.—even the parts that have been revealed and not just suspected—is too broad to continue accepting such reassurances. “Users need to ignore the Patriot Act scare stories.” Indeed.

But Is the Cloud Safer Elsewhere?

A legitimate question is whether NSA spying is a serious downside or just a marketing tool for foreign cloud providers. In particular, are foreign governments truly less invasive than the U.S. with regard to their surveillance programs? The U.S., thanks to its enormous federal budget and historical technology lead, may have the upper hand in sophistication and scope, but thinking that other governments don't aspire to the same level of Orwellian voyeurism is naïve at best. So, to be fair to the U.S., other countries likely pose similar challenges with regard to privacy and data protection, although they may lack the same fervor and security-at-all-costs (or control-at-all-costs) mindset that drives many of these programs. But image is everything, and right now the U.S. is the main target of anti-surveillance sentiment.

National Cloud

Growing suspicion among nations could potentially lead to a much less “open” Internet or cloud, where physical borders become digitized, thus creating more limits (whether legal or de facto) to business and other foreign interactions. To some extent, this may be natural: although English is the dominant language of the world (in influence, at least), there’s no reason to expect it to stay that way. How often do you visit websites or patronize businesses that don’t use it (or whatever your native language)? On the other hand, large multinational companies may exert their influence to maintain openness with regard to trading, particularly since they will have the resources to serve many cultures.

Conclusions

NSA spying hurts the image of the U.S. and will likely put a dent in the revenue of U.S.-based cloud providers. But some of that harm may be unwarranted given that many (if not all) other nations are no doubt pursuing their own domestic and foreign surveillance programs, although they may be less sophisticated. Unfortunately for U.S. providers, however, Edward Snowden’s revelations give foreign cloud service companies a marketing gimmick to lure business. And indeed, in some cases, the political climate of other countries may be (for the time being) more conducive to data privacy than the U.S.; in such cases, the marketing gimmick is a real draw.
For the U.S., however, its denial of spying programs (followed by reluctant admission) is a serious blow to its credibility, and IT companies will suffer for it. Whether policy will continue to build a virtual wall between insiders and outsiders remains to be seen, but for now, the damage may already be done for cloud providers.
Image courtesy of the Electronic Frontier Foundation

Tips and Best Practices for Evaluating and Using a Data Center




Data Centered: Tips and Best Practices for Evaluating and Using a Data Center

As a number of events in recent times have clearly demonstrated, now more than ever, businesses need to be prepared for the effects of both man-made and natural disasters.
In the event of a natural disaster, power outage or other disruption, the ability to preserve essential data and maintain business continuity is critically important. What was once the concern of a few select data-dependent industries and high-tech companies is now an issue for growing numbers of professionals representing an expanding range of industries. Whether it is a localized outage or a large-scale emergency, any interruption to business has the potential to be financially and operationally devastating, making business continuity preparation and disaster recovery planning a necessity for virtually all businesses.
As companies assimilate and adapt to new technologies and processes, establishing and maintaining business continuity practices becomes even more of a challenge. The speed with which information is exchanged has increased dramatically, and although new processing power and data management capabilities yield greater efficiency, they also introduce new challenges: tighter deadlines, increased expectations, new financial pressures and thorny logistical issues. The exploding popularity and growing ubiquity of cloud computing is perhaps the most prominent example of this double-edged trend, as remote data storage and backup at a data center offers both a compelling solution and a new set of trials for business continuity.
As a result, decision makers at companies large and small need to educate themselves about what to look for in a data center—including the technological components, infrastructure priorities and security standards that should drive their selection—to ensure their data center facility is reliable and their business information is safe and secure in any situation. The right data center and the right business continuity policies, practices and procedures will not only enhance a company’s ability to protect sensitive information and maintain operational continuity in the face of exceptional or emergency circumstances, but it will also enable that company to remain compliant in an increasingly robust and evolving regulatory environment.

A New Paradigm

The need for increased data security, mobility, flexibility and business continuity is closely tied to the acceleration of business processes and the corresponding acceleration of business expectations. Gone are the sticky-note reminders and phone messages stuck to the top of your desk—today, the clutter atop actual desks has been largely replaced by the electronic urgency of the figurative desktop and the immediacy of email and other electronic communications. There is no lag time: communication is expected to be immediate, and data is expected to be available 24/7. Engineering and technology have advanced to the point that there is no excuse not to have communication and data highly available from a resources standpoint. Today, it is just a matter of deciding to make it happen.

Mostly Cloudy

In the context of a faster and more demanding professional environment, the solution for many businesses is colocation/cloud computing, a concept that confers significant advantages in data management flexibility, access to information and security/business continuity. The formidable problems posed by a power outage, failed server or a cut data line can all be mitigated or avoided by having data stored and available in multiple locations. Today, companies are beginning to understand that if they do not maintain their critical data in several geographically dispersed locations, they simply are not representing their clients’ interests to the degree that they should. Remote data storage and backup is the new normal.
At the same time, most businesses are realizing that they can get more or better technology bang for their buck by outsourcing their cloud computing and remote storage/backup requirements. The recognition that a company does not have to own or operate the technology in house has opened up a whole new world and made cloud computing and remote access, backup and data storage available to many more businesses. This is good news both for the companies who want to have their data running and protected on top-tier equipment (with safe, secure and regular backup) and for clients who want to ensure that the stewardship of their information is top notch and meets their needs.

Analyzing Your Data Center Needs

Although there is a lot to think about when deciding what kind of data center you need and what kind of backup/protection program is right for your business, perhaps the most important piece of advice for any IT professional or business owner is this: do not pay for more than what you need. This all-too-common error is often the result of a decision that is made without thinking critically about how a data center’s services and technologies apply to the business model. There are two primary big-picture considerations that come into play when trying to determine what level of protection is right for you: your recovery time objective (RTO) and your recovery point objective (RPO).
  • Recovery time objective (RTO): The RTO is the amount of time that an outage will last—or, more specifically, how long you can afford an outage to last. The answer may be seconds, minutes, hours, days or weeks, but it should be evaluated in the context of your business operations and requirements. Many businesses do not take the time to answer this question with precision, and they end up taking a wild guess that leaves them either underprotected or spending more than they really must. RTO calculations should also consider what customers expect and require.
  • Recovery point objective (RPO): The RPO is the point at which the loss of data becomes a problem. Essentially, businesses must ask themselves if they can afford to lose data and, if so, how much. The answer to this question will determine how often you need to back up your information, something that can be done daily, hourly or by the minute. The cost varies dramatically, making it all the more important to determine how much data your business can live without, and how much data can be reproduced internally in the event of a disaster or business interruption. A rough worked example follows this list.
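As an illustration of the arithmetic behind these two objectives, the shell sketch below turns an assumed outage frequency, RTO and backup interval into worst-case exposure figures. Every number in it is a made-up assumption for the sake of the example, not a benchmark.
# Hypothetical inputs; replace them with your own estimates.
OUTAGES_PER_YEAR=2          # expected unplanned outages per year
RTO_HOURS=4                 # longest outage the business can tolerate
COST_PER_HOUR=5000          # assumed cost of one hour of downtime, in dollars
BACKUP_INTERVAL_HOURS=1     # worst-case data loss (RPO) equals the backup interval
echo "Worst-case downtime cost per year: \$$((OUTAGES_PER_YEAR * RTO_HOURS * COST_PER_HOUR))"
echo "Worst-case data loss per incident: ${BACKUP_INTERVAL_HOURS} hour(s) of transactions"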

Business Continuity

Determining your RTO and RPO is an important first step in deciding what the contours of your business continuity plan need to be, as well as how aggressive you need to be with regard to data protection and backup planning. The range of options here is significant, both in the timeline for post-disaster recovery—which can entail a lag time of a day or a few hours to rebuild or reconstitute your information in a different data center—and in the mechanisms used to facilitate that process. There are a number of technology and procedural options (with corresponding cost considerations) for backing up your critical data that will affect your business continuity planning. A business can opt for synchronous data replication (essentially instant redundancy/backup in two places at once) or asynchronous replication, which includes a small lag time and is independent of the distance between facilities. In the event of a power outage or catastrophic interruption, some businesses may need to be instantly redirected for transfer so they can get up and running at another location almost immediately, while others (especially smaller businesses) prefer to select a more affordable recovery plan that requires a lag of a day or so while their data is “rehydrated” and they return to operational status. It all comes down to the expense, and to the reputation requirements and survivability of a business.

Data Center Facility Evaluation

Even the most thoughtful disaster preparedness or well-designed business continuity plan will fall short if the technical architecture, security apparatus and logistical support of the data center are not up to the challenge. Evaluating potential data centers is a detailed process that includes asking a number of pointed questions.
  • Is the facility at a strategically selected location (a site that is geographically favorable and geologically stable)?
  • How protected is the facility/server room?
  • Do security cameras monitor both the perimeter and interior of the complex to protect against theft, malicious activity and accidents? Do those cameras record, and how long are those recordings maintained?
  • Does the facility boast sophisticated and redundant cooling systems, and can it provide an uninterruptible power supply in the event of brownouts, blackouts or service interruptions?
  • Is there appropriate battery protection/backup that facilitates a coordinated shutdown instead of a hard crash that can potentially corrupt data?
  • Are the facility’s systems monitored by software that tracks all equipment for warning signs like temperature spikes or system failures?
  • Is the data center equipped with a next-generation gaseous fire-suppression system that will not unnecessarily harm infrastructure in the event of a fire?
  • Is access to sensitive areas restricted and, if so, how? Is there a badge reader or an ID system? Are logs kept of all entries and exits?
  • Are tours/inspections given on a regular basis?
If the answer to one or more of these questions fails to meet your expectations, the data center provider may not be the right fit for your business.

The Big Picture

Although IT professionals understand many of the technical aspects of data center evaluation and business continuity planning, the challenge for business owners and high-level decision makers is to improve their own understanding of these issues. It is an encouraging sign that more owners and operators are getting involved by educating themselves about the increasingly central role of technology in today’s business environment. A more sophisticated understanding of the risks and available solutions can help any responsible business take the critical steps required to mitigate risk—because before you can select the right data center or develop and implement a business continuity plan, you need to have an informed conversation about priorities and processes.
In addition to security, peace of mind and financial protections, the selection of a quality data center and the implementation of an effective plan also add value in another way: transparency. Businesses that can easily and efficiently produce compliance documentation save themselves enormous amounts of time and money. These days, successfully navigating the regulatory landscape demands more than just checking boxes: It requires a sophisticated understanding of what it means to be compliant. Frequently, it comes down to the experience of the technical group designing the systems. Does it truly understand what ambiguous terms like “physical security” actually mean in practice?
Even though selecting the right data center is not the only step toward safeguarding your data and positioning your business for success in an increasingly virtual world, it is a critical piece of the business continuity puzzle.
Leading article image courtesy of 123net

About the Authors

Phillip Curton
Ronald Redmer
Philip Curton and Ronald Redmer work for NDeX, a Farmington Hills, Mich.-based leading provider of processing and technology services to law firms nationwide. Phil serves as private cloud services director of NDeX, and Ron serves as chief information officer. Contact Phil at pcurton@ndex.com and Ron at rredmer@ndexteam.com.

The Focus of the Data Center of the Future Is Energy Efficiency


Diversified industrial manufacturer Eaton will highlight key trends in energy efficiency that will shape the future of data center design at the upcoming Eaton Hong Kong Data Center Solution Tech Day. With the theme “Keep Powering Forward”, the event will be held at the Four Seasons Hotel in Hong Kong on August 8, 2013.
Alexander M. Cutler, chairman and chief executive officer of Eaton, will preside over the opening session of the Tech Day and welcome customers and industry leaders from the Asia Pacific region. The one-day event will focus on megatrends shaping the energy management business, key challenges in the data center industry, and how proven Eaton solutions will affect future data center designs.
“Eaton is hosting this event in Hong Kong because it is a strategic financial services and regional information and communications technology hub with a robust and world-class data center industry,” said Curt Hutchins, Eaton’s Asia Pacific president. “Companies perceive data centers as a strategic IT asset and a building block for business successes, so reliable, efficient, safe and sustainable energy management is critical. This Tech Day will offer a comprehensive look at how Eaton’s power management solutions can help overcome today’s energy challenges by becoming more efficient.”
At the event, Ivo Jurek, president of Eaton’s Electrical business in Asia Pacific, will provide an in-depth introduction of the company’s electrical products and solutions. John Collins, Eaton’s global segment director for data centers, will discuss key trends that are affecting data center design. Each will set the stage for a series of technology seminars in the afternoon on:
  • Energy advantage architecture
  • Air flow management solutions and their implementation in new and existing data centers
  • Meeting the critical needs for power distribution in data centers
  • Fuse technology and applications
  • Future energy storage for uninterruptible power systems (UPSs)
  • Complete cable management solutions for lowest total installed cost
The Tech Day will conclude with a panel discussion in which presenters and experts from the region will discuss key ideas and innovations that are leading the power industry, and how data center designs need to change to meet future business requirements.
“We believe that these discussions and presentations will offer a glimpse of the future of data centers. With Hong Kong being promoted as a data center hub for the region, it becomes crucial for Eaton as a power management leader to highlight how tomorrow’s data centers will be shaped by current trends and innovations in power management,” added Hutchins.
Eaton is a diversified power management company providing energy-efficient solutions that help our customers effectively manage electrical, hydraulic and mechanical power. A global technology leader, Eaton acquired Cooper Industries plc in November 2012. The 2012 revenue of the combined companies was $21.8 billion on a pro forma basis. Eaton has approximately 102,000 employees and sells products to customers in more than 175 countries. For more information, please visit www.eaton.com.

Critical Activities in a Production Data Center




Downtime in your data center can be costly. But failing to adequately maintain your facility means you’re in for unexpected downtime—which can be much worse than planned downtime. If your data center receives much lighter traffic at certain times of day or certain times of the year, scheduling a service break during those off hours is one possibility. For instance, if product sales are virtually nonexistent during late-night and early-morning hours, those times are a good opportunity to put operations on hold while you perform needed maintenance or other critical work.
But what if your facility hosts business transactions or provides services steadily, 24 hours a day and year round? In these cases, even short discontinuities in resource availability can annoy customers and drive business to competitors. When “always on” service is a business requirement, maintenance and other critical work on the data center cannot disrupt normal operations. Performing this work on a live data center, depending on the scope of the tasks at hand, can be a tremendous challenge. What if, for example, you need to upgrade or repair your uninterruptible power supply (UPS) deployment?
Working on a live data center takes careful planning and more than a little courage. The following are some tips to help reduce the risks associated with this kind of critical work and to keep IT resources available to both internal and external customers throughout the process.

Plan Ahead or Fail

More than anything else, the key to managing critical work in a live data center is planning. No one in his or her right mind starts replacing UPSs, for instance, without first either shutting everything down or carefully reviewing the potential contingencies if the data center is to continue running. Planning, however, is more than just a matter of scheduling: it requires a comprehensive strategy for dealing with even the unexpected.
The planning phase should accept input from all parties that could be affected, particularly should the work run into complications. Generally, the more isolated or peripheral the system, the lower the risk that a failure or other complication will affect a wide swath of the company, customers and contractors. If you’re planning maintenance or an upgrade for a critical central system, like UPS, the consequences of an error or unexpected event are much broader.
Scheduling for critical work on a live data center should take into account the availability of all relevant parties. If a particular contingency necessitates bringing in an electrical contractor, for instance, ensure that the contractor is either present or available at the time of the work. Data center managers must “work with the facilities departments to coordinate the maintenance schedules for the supporting infrastructure with their asset-deployment activities,” said Kevin Lemke, Product Line Manager of Row and Small Systems Cooling for Schneider Electric’s IT Business. In addition, ensure that your schedule leaves enough padding to allow for unexpected delays. Cramming successive stages too closely together can jeopardize the entire project; for example, a delay at one stage could push a subsequent stage out to the point that a contractor involved in the effort becomes unavailable. Data center managers should give their employees credit for their competence—as well as some leeway for the unexpected.
Extensive planning for critical work is a requisite for consistent success. Ideally, however, preparing for critical work on a live data center should go beyond one-time planning: it should begin at the design phase.

Designing for Live Data Center Upgrades and Maintenance

If the system you’re upgrading or repairing is a single point of failure in your data center, a live fix is all but impossible. Thus, this kind of critical work is most feasible in cases where the design phase of the facility looks ahead to maintaining uptime even when this work is performed. Victor Garcia, Director of Facilities at Brocade, notes that to maintain or upgrade a live data center, it “has to be designed for and planned in advance. Depending on the level of uptime required, either N+1, N+2 or 2N designs need to be incorporated into the plans and operations so that uptime can be achieved while performing maintenance.” This redundancy is critical: not only does it avoid single points of failure, which are a bane of data center uptime, but it enables live maintenance. Replacing a redundant UPS while keeping the facility running, for instance, is far easier than doing so when you have just a single UPS!
In addition to the initial design, tracking changes made to systems is critical. Despite extensive planning, critical work can land in serious jeopardy if the configuration assumed in the plans turns out to be different from what’s discovered because no one kept adequate records over time. “From an operational perspective, having a change-control process where any changes—whether IT or facilities related—are reviewed cross-functionally to ensure that none of them put the data center at risk,” said Garcia. In some cases, however, certain changes are potentially problematic. Garcia recommends mitigation or contingency plans in such cases to prevent downtime once the work starts.
Unfortunately, even if you’ve implemented a change-management policy, prudence demands verification of relevant system configurations before you begin critical work. Although doing so may cost some extra time and effort beforehand, you must weigh that perhaps unnecessary effort against the costs that your business might incur should you run into unexpected conditions once work begins. And because critical work requires careful scheduling, such an occurrence can easily throw off large chunks of the schedule, potentially ruining the entire plan.

Consider Peripheral Effects on Operations

Focusing on electricity, airflow and network connectivity is critical when performing maintenance on a live data center, but data center managers should be careful to remember other, more indirect aspects of how their work might affect operations. If the work involves some kind of physical construction, for example, Swedish construction company Skanska recommends sealing the work area with plastic to prevent dust from reaching IT and other sensitive equipment. Furthermore, depending on the location, workers may need to wear booties or other coverings to prevent dust from hitching a ride into the data center proper.
In addition, temporary rearrangements of equipment or the presence of certain gear—if large enough—can cause changes in the normal airflow of the data center. The result can be dangerous hot spots that could lead to system failures. Depending on the scope and budget of the project, one option is to employ computational fluid dynamics (CFD) to model the airflow. Although doing so may or may not be practical for intermediate stages of the work, it can deliver solid returns if applied during the planning phase for the project—mainly if the work involves a new cooling system or otherwise altered airflow dynamics, such as through rearrangement of server rows.

Practice Where Possible

As far as possible—and practical, given the associated costs in employee time and so forth—conduct a practice run of your plan. The more critical the systems you’re working on, the more beneficial practice can be in avoiding downtime. In addition to identifying potential trouble spots that you might not have considered or might otherwise have been unaware of, a dry run can help employees and other involved parties gain confidence in what they’ll be doing. It will also help data center managers govern the process more smoothly.

Safety

Uptime should always be second to employee and contractor safety. A dangerous shortcut during the process might have the potential to save the entire project, but it can also put lives at risk. Practically speaking, a serious injury or death in the data center is likely to cause more trouble than some downtime because of a problem that arises during the project. From a more compassionate perspective, it’s better to lose some business than to risk the lives of those working in the data center.
Safety considerations may not improve uptime, but they can improve morale and encourage responsibility among employees and data center managers alike. They can also help avoid regulatory hassles. Of course, there’s always a balance to be struck: it’s easy to go overboard with safety to the point of foolishness. Usually, however, a data center manager with some common sense will be able to identify areas of critical work on the live facility that require more care than others.

Learn From Past Experiences

Maybe your last effort at maintenance led to downtime. The only unforgivable failure is the failure to learn from the experience. If you’ve needed to perform live maintenance or upgrades in the past, you’ll need to do them again in the future. Even if your last effort wasn’t a resounding success, you can glean information from it that will help you avoid similar difficulties in future projects.
In addition, experiences gained during maintenance projects—whether successful or not—enable opportunities to prepare beforehand for future projects. For instance, a data center manager might consider implementing a change-management policy to keep better track of the equipment configurations in the facility.

Conclusions

The central facet of any project involving critical work on a live data center should be planning. The more detailed the plan, taking into consideration likely contingencies that could arise during the project, the more likely staff and contractors will execute it successfully.
Apart from simply planning ahead of particular projects, however, companies should plan from the very start: the design phase of the data center. Appropriate redundancy in critical systems not only avoids single points of failure, it enables maintenance and upgrades while the data center is still running. Brocade’s Victor Garcia suggests, “From a design standpoint, future-proof your design to the next level by thinking through each discipline: what if you had to provide one more level of redundancy or what if your densities or number of racks had to increase, which increases your total system load. Make sure you can add an extra set of equipment from a space perspective, and being able to tie it into the distribution system without interruptions, for example installing maintenance bypass switches, bypass valves or isolation valves, putting in a tie breaker or simply reserving space in the mechanical or electrical room for expansion capabilities.”
This kind of planning and ongoing awareness of data center design and infrastructure not only enables scalability, it enables critical work that doesn’t interfere with uptime. If your customers, whether internal or external, demand always-on access to IT resources, you can expect to face live maintenance and upgrade projects. By taking some steps beforehand—including but not limited to detailed planning—you can avoid the high costs of downtime while improving your data center.

About Jeff Clark

Jeff Clark is editor for the Data Center Journal. He holds a bachelor’s degree in physics from the University of Richmond, as well as master’s and doctorate degrees in electrical engineering from Virginia Tech. An author and aspiring renaissance man, his interests range from quantum mechanics and processor technology to drawing and philosophy.