Data Center is our focus

We help you build, access, and manage your data centers and server rooms.

Structured Cabling

We structure your cabling: fiber optic, UTP, STP, and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software, and network (ITOM), and maintain your ITSM services.

Our Great People

A great team to support happy customers.

Saturday, April 20, 2013

10 reasons not to use virtualization

Top 10 Reasons Not to Virtualize

November 19, 2012

While this column normally discusses the benefits of virtualizing your workloads, sometimes it's important to know when to Just Say No.
Cesare Garlati, VP of mobile security at Trend Micro, agrees. At the recent RSA Conference 2012 in London, he suggested ten different situations when the right thing to do is turn your back on your hypervisors and run applications as nature intended — on good old-fashioned physical server iron:
1. If going wrong is not an option. In other words, if you have something that works and needs to keep working, then what's the point in introducing the complexity and unknowns of server virtualization – and thereby risking downtime?
2. When licenses don't allow it. Some applications' licenses simply don't allow them to be run in virtual machines. You don't want to be doing anything that contravenes the licenses your company agreed to (you did read the license before opening the packaging, didn't you?), so that means server virtualization is out of the question in these cases.
3. With high I/O apps, specialist hardware or dongles. Some applications with high I/O characteristics like databases (or anything that requires tuning to work with the underlying server hardware), grid or distributed SMP applications that need high speed interconnects, graphics intensive apps, or applications that require hardware cards or dongles are a no-no when it comes to virtualization. Don't even think about it.
4. When time synchronization is critical. Virtual machines run their own clocks, and that means the time they keep will diverge from the host server's clock over time. If very small divergences are critical — as may be the case with financial real-time trading or some industrial control systems — then stick to physical systems.
5. When you don't have the budget to do it right. Server virtualization may save money, but to do it properly takes some money, too. That means there's no point going in to a virtualization project half-cocked: if you can't pay for the tools and management systems required to support virtualization technology then you are better off leaving it alone.
6. When capacity is limited. Despite the improvements made in recent months and years, there's no denying that a VM running on top of a hypervisor is not going to perform as fast as a physical machine running the same OS and applications directly.
So if your servers are currently running at pretty much full capacity then there's certainly little point in adding a hypervisor to the equation. You could always buy more servers to run hypervisors on — but virtualization is meant to enable you to cut down on your physical servers, not force you to buy more.
7. When you need to manage encryption keys. Key management is easy on physical servers, but the same systems won't work with virtual workloads that move from physical machine to physical machine. There are solutions and workarounds, but you'll have to investigate them before you can carry out secure key management on VMs.
8. When high availability is baked into the application. Virtualization platforms like VMware's offer high availability services of one kind or another for VMs. So far, so good. But older mission-critical apps may have HA (high availability) built in, and that may not work when virtualized.
For example, Microsoft Cluster Service with a shared disk will break in environments that allow VMs to move around automatically. The upshot, concludes Garlati, is that if your VM platform provides HA then you better make sure that your apps don't — and vice versa.
9. To save money on VDI. This is a simple one really. Garlati insists that despite the fact that there are plenty of benefits to VDI – better security, for example – you shouldn't expect it to save you money. If that's your primary objective, don't virtualize your desktop.
10. When there's a risk of a virtualization loop. If you try to virtualize the components of your virtualization platform, you could end up in trouble. For example, if your virtualization platform and hypervisors rely on AD (Active Directory) and DNS, and your AD and DNS servers are virtualized, then your hypervisors won't start because they are waiting for AD, and AD won't start because it's waiting for the hypervisor. It's a vicious circle you should avoid at all costs.
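The loop Garlati describes is a textbook dependency cycle: the hypervisor needs AD and DNS before it can boot, but AD and DNS are VMs that need the hypervisor. As a rough illustration, you can model startup dependencies as a graph and check for cycles before virtualizing a service; the service names below are hypothetical, not any real platform's components.

```python
# Detect circular startup dependencies among infrastructure services.
# Each service maps to the services that must be up before it can start.
deps = {
    "hypervisor": ["active_directory", "dns"],  # hosts won't boot without AD/DNS
    "active_directory": ["hypervisor"],         # AD runs as a VM on the hypervisor
    "dns": ["hypervisor"],                      # so does DNS
}

def find_cycle(graph):
    """Return one dependency cycle as a list of names, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node, path):
        color[node] = GRAY
        path.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:        # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep, path)
                if cycle:
                    return cycle
        path.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = visit(node, [])
            if cycle:
                return cycle
    return None

cycle = find_cycle(deps)
if cycle:
    print("Virtualization loop:", " -> ".join(cycle))
```

Running the check on the map above reports the hypervisor → AD → hypervisor loop, which is exactly the deadlock that keeps a cold-started data center from coming back up.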
So there you have it — ten good reasons why not to virtualize. I promise we'll get back to the positives of server virtualization in the next column…
Paul Rubens is a technology journalist and contributor to ServerWatch, EnterpriseNetworkingPlanet and EnterpriseMobileToday. He has also covered technology for international newspapers and magazines including The Economist and The Financial Times since 1991.

11 steps to roll out a Virtual Desktop Infrastructure

11 Steps to Roll-Out a Virtual Desktop Infrastructure

Infographic that Focuses on Important Steps to Successfully Deploy a VDI Environment.
A Virtual Desktop Infrastructure (VDI) roll-out needs to be carefully planned and tested so you're ready when it is time for the full deployment. It's more than a "flick-the-switch" exercise: you want to make sure you don't cause confusion among users or harm productivity. By implementing virtual desktops gradually, you can gather metrics that will help make the full VDI deployment a success.

What should a VDI desktop cost?

What Should A VDI Desktop Cost?
A small business should be able to configure a viable 250 desktop VDI installation for less than $150 per desktop.
By George Crump,  InformationWeek 
April 12, 2013

Recently I met with an organization's IT team after they had completed the initial rollout of their virtual desktop infrastructure project. The VDI install went quite well and desktop performance was by their description "acceptable." I am not convinced that the users think "acceptable" is, well, acceptable, but that is a story for another entry.

What jumped out at me in our meeting was the cost per desktop. Based on the number of desktop instances they felt that their "pod" of host and storage could support, the average cost per desktop was about $500 to $600, prior to the actual device that sits on the desktop.

Obviously the device has some bearing on the cost per desktop, but this company was counting on the Bring Your Own Device (BYOD) trend to eliminate those costs. However, the company reimbursed employees for the devices they brought in, so those devices really should have been factored into the overall cost. But for now, let's leave the cost per desktop at $500 to $600. I think that cost is still way too high for broad adoption of VDI to make sense.

What Is Driving The VDI Cost Per Desktop?
The key factor that drives up the cost per desktop is simple: how many desktops can you support per host? Obviously, 2,000 desktops per host is going to be far less expensive per desktop than 1,000. What is the big limiter to desktop density? Storage -- primarily storage performance, not storage capacity. Capacity issues have largely been solved thanks to the efficiency of thin provisioning, golden masters, and linked clones -- assuming your storage has the performance to handle the dynamic writes those technologies generate.
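The per-desktop figure is simple division: the cost of a pod (hosts plus shared storage) spread over the desktops the pod can support. A quick sketch of that arithmetic -- all dollar figures below are hypothetical, chosen only to show how density drives the result:

```python
def cost_per_desktop(host_cost, host_count, storage_cost, desktops):
    """Total pod cost (hosts + shared storage) divided by desktop count."""
    return (host_cost * host_count + storage_cost) / desktops

# Hypothetical pod: 4 hosts at $15,000 each plus a $90,000 storage array.
# If the storage bottleneck caps the pod at 300 desktops:
low_density = cost_per_desktop(15_000, 4, 90_000, 300)   # $500 per desktop
# Fixing the storage bottleneck to double the density halves the figure:
high_density = cost_per_desktop(15_000, 4, 90_000, 600)  # $250 per desktop
print(f"${low_density:.0f} vs ${high_density:.0f}")
```

The same hardware spend lands at either end of the $500-to-$250 range discussed in this article depending purely on how many desktops each pod can host, which is why storage performance dominates the economics.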

It is important to note what is not the problem -- processing power. In almost every VDI environment we have studied there is more than enough processing power to increase desktop density. The bottleneck almost always is somewhere in the storage infrastructure.
Storage performance is a problem because the higher the density of desktops the more random the I/O becomes. Also, as mentioned above, in a persistent desktop VDI strategy, the need to handle the dynamic writes caused by thin provisioning, golden masters and linked clones also puts stress on the storage infrastructure.

This puts IT in an awkward position: either solve the storage performance problem with an expensive, high-end storage system, or stand up more hosts as the bottleneck is reached. The most common approach is for IT to create "pods" for their VDI rollout. Each pod is a dedicated number of hosts connected to a dedicated storage device. The aggregate use of the CPUs in that pod is rarely above 25%, but the storage system is running at full tilt. Adding more pods to increase the number of supported desktops becomes an expensive and inefficient solution that increases the cost per desktop.

Ending The VDI Vs. Storage Battle
The answer from the storage community has been to throw flash at the problem. And basically throw flash everywhere: in the server, in the network and on the storage system itself. We don't buy into this strategy of haphazardly applying flash technology, but each technique has an advantage and applying the right technique for your environment can make a significant difference and most importantly bring down the per desktop cost of a VDI infrastructure.

As we saw in our recent lab report "Making Storage the VDI Solution, Not the Problem", where DRAM in the server was leveraged we projected a significant increase in the number of desktop instances per host, a reduction in the shared storage expense and a huge decrease in the cost per virtualized desktop. Based on that study and a few more that are still in the works we suggest that the new target per virtualized desktop should be less than $250 while delivering better performance than what is available to a standalone desktop.

This type of technology makes VDI justifiable from a CapEx perspective in addition to its traditional strength as an operational saver. It also helps VDI make more sense for organizations that traditionally considered themselves too small for a VDI project. We project that a small business may be able to configure a viable 250 desktop VDI installation for less than $150 per desktop.

In my next entry "Are You Delivering The Right VDI IOPs" we will examine the new reality in IOPs per desktop and see if "acceptable" really is acceptable.

The virtual desktop moves toward mobility

Desktop virtualization companies take on mobility

Recent changes among vendors show that the desktop virtualization market is shifting slowly toward a focus on mobility.
A few weeks ago, Citrix announced it is canceling Citrix Synergy in London this October in favor of a series of smaller mobility shows scattered around Europe. I believe this is the latest indication that Citrix is altering its mind-set from desktop and application virtualization to mobility. You could call it "Citrix 2.0."
This doesn't mean Citrix is moving away from desktops (it clearly occupies a huge role in that market and is still innovating), but it does reinforce the perception that desktop virtualization companies are spreading their wings.
Look at all the solutions Citrix has, outside of XenApp and XenDesktop. The company has Podio, ShareFile, all the Citrix Online components and, of course, CloudGateway. Plus, it's taking advantage of Citrix Receiver to the point where that product suite has become a platform all to itself. In just a few years you'll see people deploy Citrix Receiver to mobile devices without even considering it to be a remote desktop product but simply a mobile application and information management tool.

Vendors take on mobile

There are many other desktop virtualization companies adopting a decidedly mobile focus as well.
VMware recently repackaged its Horizon Suite as an end-user computing suite that includes physical desktop, virtual desktop and mobility management tools. VMware manages mobile devices in one of two ways: On some Android devices it can actually virtualize the OS and provide two distinct containers, one for work and one for play. On Apple iOS devices, it creates sandboxed containers for wrapped applications, much like other mobile application management offerings do.
Mokafive, a company known for its layering and client hypervisor management tools, also released a mobility package aimed at mobile information management (which, I might add, is expertly poised to do mobile application management also, should the company decide to go down that road).
We see companies such as Dell (plus Wyse Technologies and Quest Software before them) acquiring software to manage mobile devices and going so far as to include those device management features into Microsoft System Center Configuration Manager. All of a sudden, we can manage our devices from the same place that we manage our physical and virtual desktops.


What does all this mean to the desktop virtualization market?
Our worlds are blending in ways that we couldn't predict a few years ago, but our roles aren't changing much -- even if the technology is. We still primarily care about delivering applications to devices. We still care that users access data and websites securely. These principles apply to the same group of people, regardless of whether the device used to access the apps and information is on the desktop, in the data center or in their pocket.
A high-ranking Citrix employee once told me to get to know CloudGateway as well as I know XenApp and XenDesktop. Even if you're not responsible for applications, data or security on mobile devices today, change is in the wind, and they could fall onto your plate any day now. Perhaps delivering Windows desktops to mobile devices is the way you're going about it now, but when it comes time to find a more appropriate methodology, you'll certainly need to be involved.
If you don't believe me, just look to the desktop virtualization companies that you care about today. There are the ones that I've mentioned already (Citrix, VMware, Mokafive, Dell), plus other big names including AppSense, Symantec and RES Software. Even antivirus companies such as McAfee and BitDefender are pivoting into this space.
If you want more evidence, be sure to watch the keynote at Citrix Synergy in Los Angeles next month. What used to be a 90-minute showcase of new Windows desktop virtualization features has become a cloud and mobility expo. There is still talk of Windows desktops, but the energy in the room comes from mobility topics, and I expect mobility to bleed further into our desktop virtualization lives, too.

The future of the DESKTOP is the virtual desktop..

desktop virtualization

Desktop virtualization is the concept of isolating a logical operating system (OS) instance from the client that is used to access it.
There are several different conceptual models of desktop virtualization, which can broadly be divided into two categories based on whether or not the operating system instance is executed locally or remotely. It is important to note that not all forms of desktop virtualization involve the use of virtual machines (VMs).
Host-based forms of desktop virtualization require that users view and interact with their desktops over a network by using a remote display protocol. Because processing takes place in a data center, client devices can be thin clients, zero clients, smartphones, and tablets. Included in this category are:
Host-based virtual machines: Each user connects to an individual virtual machine that is hosted in a data center. The user may connect to the same VM every time, allowing personalization (known as a persistent desktop), or be given a random VM from a pool (a non-persistent desktop). See also: virtual desktop infrastructure (VDI)
Shared hosted: Users connect to either a shared desktop or simply individual applications that run on a server. Shared hosted is also known as remote desktop services or terminal services.  See also: remote desktop services and terminal services.
Host-based physical machines or blades: The operating system runs directly on physical hardware located in a data center.
Client-based types of desktop virtualization require processing to occur on local hardware; the use of thin clients, zero clients, and mobile devices is not possible. These types of desktop virtualization include:
OS streaming: The operating system runs on local hardware, but boots to a remote disk image across the network. This is useful for groups of desktops that use the same disk image. OS streaming requires a constant network connection in order to function; local hardware consists of a fat-client with all of the features of a full desktop computer except for a hard drive.
Client-based virtual machines: A virtual machine runs on a fully functional PC, with a hypervisor in place. Client-based virtual machines can be managed by regularly syncing the disk image with a server, but a constant network connection is not necessary for them to function.

Friday, April 19, 2013

As the PC market declines, touchscreen devices are taking over...

As the PC market turns, touchscreens start to take over

Summary: Samsung's Chromebook has been at the top of Amazon's list of bestselling notebooks for several months. But a closer look at the rest of that list reveals some interesting facts about an industry in transition. Most notably, touchscreens are finally starting to take off.
Google’s Chrome OS isn’t exactly setting the online universe on fire, according to the latest numbers from NetMarketShare. In fact, Chromebooks are so lightly used that they don't even appear on the latest reports from the web metrics company.
When I wrote that news earlier this week, I heard two reactions, for the most part. The first was, "This surprises you?" The second was: "But that can’t be. The Samsung Chromebook has been at the top of Amazon’s bestselling laptops list for months!"
Indeed it has. That apparent contradiction surprised me, too, so I decided to take a much closer look at that Amazon list. I came away with a plausible explanation for Samsung’s success and some insights into the PC market as we head into midyear.
First, a little background. As a book author, I know a thing or two about Amazon’s bestseller lists. They’re based on complex (and highly secretive) algorithms that blend long-term sales with short-term momentum. So a product that sees a spike in sales in a single day can move impressively up the charts for a day or two, and then drop quickly back to its normal slot. But the products that stay atop the charts are those that sell steadily over time.
By that measure, there’s reason to congratulate Samsung for the Chromebook’s performance. Its tenure at the top means it has been selling consistently over time. So what’s the secret of its success?
Let’s start with the most obvious attribute: its price. At $249, the Samsung Chromebook is the second-cheapest device on the Amazon list. In fact, when I copied the list into a spreadsheet and sorted by price, lowest prices first, Chromebooks magically rose to the top.
Two of the top five notebooks are dirt-cheap Chromebooks. When you sort the 100 laptops on the list by price, only one Windows-based machine, the Acer Aspire One, managed to sneak into the bargain basement. With the Samsung getting excellent reviews for its build quality, at a price of $249, it passed the “What the hell?” threshold for many gadget buyers.
But I found the rest of the list much more enlightening. Here’s the short version:
  • Apple’s MacBooks are very popular indeed.
  • Touchscreens are making inroads into the mainstream.
  • Cheap PCs are still the no-profit lifeblood of the industry.
Let’s dig in.
For starters, there really aren’t 100 discrete devices in the Amazon top 100 list. I threw out 10 of the entries on the list that were available only from third-party sellers, not fulfilled by Amazon. This group included three ancient Apple iBooks powered by G4 CPUs. It also included listings for five equally antique refurbished Dell machines. After excluding those listings, we end up with a total of 90 entries in the Formerly Top 100 list.
And there are a lot of duplicates on that list. The Samsung Chromebook comes in a single configuration, but many of the other entries on the list represent the same device with a different CPU or memory, in a different color, or with a slightly different model number.
One could, in fact, make a plausible case that ASUS deserves the top spot on the list with its amazingly inexpensive low-end touchscreen notebook powered by an Intel i3. The ASUS X202E appears in the #10 spot on the list, but its siblings, the silver and pink units with the same model number and the identical device sold as the Q200E, appear on the list as well. All told, this machine appears five times. If those sales were consolidated, it would certainly move up the charts - perhaps all the way to the top.
I found Apple MacBooks in 12 of the top 100 slots, representing six models. They paint a picture of Apple's amazingly successful sales strategy: create a manageable number of models, build them very well, slap a premium price on each one, and collect the greenbacks.
If you sort the list by price in reverse order, MacBooks float to the top of the list. All of the 12 Mac models on the bestseller list were among the 20 most expensive laptops you can buy at Amazon. Only one had an actual selling price of (barely) under $1000. Collectively, they represent only six models: the 11- and 13-inch MacBook Airs, and the 13- and 15-inch MacBook Pros, with and without Retina displays.
And then there’s the incredibly diverse Windows laptop lineup.
When I combined all the duplicate entries, I found a total of 46 Windows-powered devices on the bestseller list. Here’s the breakdown:
  • Only 2 were running Windows 7
  • 32 were running Windows 8 on conventional notebook form factors
  • 12 were running Windows 8 with touchscreens
That middle group is basically the strip mall of PCs: ho-hum, mostly heavy lookalike devices at price points that make you wonder how the OEMs can make a dime of profit. Of that group, 56 percent were priced at $500 or less and 88 percent were $700 or less.
But if you’re looking for signs of life, look at the list of touchscreen devices, most of them fairly recent additions to the bestseller list.
  • Acer: Aspire V5
  • ASUS: Taichi Convertible Ultrabook; VivoBook S400CA and VivoBook S500CA; Q200E/VivoBook X202E
  • HP: Envy X2 convertible
  • Lenovo: IdeaPad Yoga 13; Thinkpad Twist; ThinkPad X1 Carbon Touch
  • Samsung: ATIV Smart PC 500T; ATIV Smart PC Pro XE700T
  • Sony: VAIO T Series
On average, the touchscreen devices sold for $802 each. By contrast, the non-touchscreen devices sold for $515. Part of that is the current premium price for touch-enabled displays. But as volumes go up, that component price should go down, making touchscreens more popular.
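The $802-versus-$515 comparison is just a grouped average over the deduplicated list. A sketch of that computation -- the sample prices below are made up for illustration, not the actual Amazon data:

```python
from statistics import mean

# (price, has_touchscreen) -- hypothetical sample, not the real Amazon list
laptops = [
    (249, False), (380, False), (450, False), (520, False), (980, False),
    (650, True), (750, True), (850, True), (960, True),
]

# Split the list into touch and non-touch groups, then average each.
touch = [price for price, has_touch in laptops if has_touch]
non_touch = [price for price, has_touch in laptops if not has_touch]

print(f"touch avg: ${mean(touch):.2f}, non-touch avg: ${mean(non_touch):.2f}")
```

With real data the same two-line split-and-average produces the premium gap described above; the interesting question is how fast that gap closes as touch-panel volumes rise.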
What I found most fascinating about this list were the Lenovo entries. In the recent dismal Q1, Lenovo nearly hit the top of the worldwide PC sales charts. And it’s not doing it with just cheap PCs. The Yoga 13, ThinkPad Twist, and X1 Carbon Touch are genuinely innovative designs, sold at premium prices.
We are in a time of transition in the PC industry. By the end of the year, I predict this list will look very different indeed.
Topics: PCs, Apple, Google, Microsoft


Ed Bott is an award-winning technology writer with more than two decades' experience writing for mainstream media outlets and online publications.

Thursday, April 18, 2013

Monitoring toll roads with WebNMS

WebNMS Road Infrastructure Manager

Highways span the length and breadth of the country, and efficient management of the passive assets along them is vital. WebNMS offers Road Infrastructure Manager, a solution to monitor the health of critical highway systems such as emergency calling booths, digital sign boards, weather stations, and toll collection booths.
Digital sign boards can be made interactive and tailored to display messages based on real-time information pushed from a central location. Traffic information, such as the number of vehicles passing by category, is recorded instantly and accurately.
Sensors capture a wealth of information; based on weather conditions, operators can alert travellers through the digital sign boards. Faulty emergency booths are also flagged, along with their exact locations, for servicing.
WebNMS Road Infrastructure Manager ensures:
  • Efficient utilization of highway passive assets
  • Proactive diagnosis of faults in emergency booths
  • Improved interaction between passive assets and operators
  • Fewer accidents caused by unfavorable weather conditions
  • Accurate monitoring of traffic flow at toll gates

Monitoring power grids / SUTET transmission lines with WebNMS

WebNMS Power Grid Monitoring Solution

The massive growth of power transmission networks and transmission capacity has spurred complexity and risk in managing power grids. WebNMS offers an out-of-the-box solution to tackle the challenges of power infrastructure and grid size. WebNMS Power Grid Monitoring is a reliable, intelligent solution for monitoring complex electrical grids in real time. The grids are monitored by a series of sensors at regular intervals, and the data fetched by RTUs (remote terminal units) are passed on to WebNMS, enabling operators to make effective decisions. The sensors accurately capture grid disturbances such as leakage and breakage, improving the energy efficiency and stability of the power sector.
WebNMS Power Grid Monitoring Solution ensures:
  • Proactive diagnosis of power leakage or breakage
  • Reduced distribution loss due to theft
  • More effective management of the transmission system
  • Improved interaction between energy providers and consumers
  • Peak-demand management and blackout avoidance
  • Optimal utilization of existing equipment

Architectural Diagram

Power Grid Monitoring

Product Features

Grid Monitoring
  • Monitors the state of power lines
  • Monitors leakages and breakages of power lines
  • Monitors the devices in distribution substations
Grid Security and Surveillance
  • Captures interconnected line losses
  • Captures electricity theft
  • Captures distribution and transmission system stress
Energy Dashboard
  • Reports on overall power-usage pattern
  • Reports on improper functioning of safety equipment
  • KPI reports on energy efficiency across sites/regions
Alert System
  • Alerts on maintenance of assets like breakers, switches, transformers, etc.
  • Alerts during power leakage due to increased sub-threshold currents
  • Alerts during power breakage due to adverse weather, accidents, etc.
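The alerting described above boils down to comparing each sensor sample against thresholds: abnormally high residual current suggests leakage, while near-zero current on an energized line suggests breakage. A minimal sketch of that kind of check, assuming hypothetical threshold values and field names (this is not the WebNMS API):

```python
# Hypothetical thresholds for one monitored line, in amperes.
LEAKAGE_THRESHOLD_A = 0.5    # residual current above this suggests leakage
BREAKAGE_CURRENT_A = 0.01    # near-zero current on a live line suggests breakage

def classify_reading(line_current, residual_current, line_energized=True):
    """Return an alert label for one sensor sample, or None if healthy."""
    if line_energized and line_current < BREAKAGE_CURRENT_A:
        return "BREAKAGE"    # energized line carrying no current
    if residual_current > LEAKAGE_THRESHOLD_A:
        return "LEAKAGE"     # current escaping to ground
    return None

# Example samples as an RTU might report them.
samples = [
    {"line": "L1", "line_current": 120.0, "residual_current": 0.02},
    {"line": "L2", "line_current": 0.0,   "residual_current": 0.0},
    {"line": "L3", "line_current": 95.0,  "residual_current": 0.9},
]
for s in samples:
    alert = classify_reading(s["line_current"], s["residual_current"])
    if alert:
        print(f"{s['line']}: {alert}")
```

In a real deployment these per-sample checks would feed the alert system, with hysteresis and repeated-sample confirmation added so a single noisy reading does not page an operator.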
Integration with Maps
  • Map integration allows power leakage and breakage to be located accurately, saving manual time and effort in pinpointing the fault.