
Saturday, July 18, 2015

Is It Possible to Monitor SDN with PRTG?



Whether we’re talking with the press or with analysts, the acronym SDN comes up again and again, combined with questions regarding PRTG Network Monitor and Paessler’s position on the topic. Even our clients are continuously confronted with the matter and ask us what SDN actually is, whether they need it and if PRTG supports it. It’s time to shed some light on SDN and do a little explaining.

What Is SDN?

Simply put, SDN is network virtualization. SDN stands for Software-Defined Networking and refers to the decoupling of the so-called control plane (the control level) from the data plane, the level on which data is actually moved around and where the hardware (switches, routers, etc.) sits. The control plane communicates with the data plane via a protocol such as OpenFlow, which is managed by the trade organization Open Networking Foundation. Above the control plane sits the application plane, the level on which applications run.

SDN - Software Defined Networking

Why SDN?

The following are reasons for—and advantages of—introducing SDN: 
  • Networks are more complex than ever. Configuring network devices requires a lot of effort—and usually has to be done by the administrator, by hand. SDN claims to be able to reduce this effort dramatically.
  • Virtualization, Big Data, Internet of Things, Cloud Computing, BYOD: almost all big IT topics that arose in the past few years increase data traffic significantly and pose new challenges and requirements regarding planning and coordinating data streams in a network. SDN boasts of having the solution to this as well.
  • Another aspect often attributed to SDN is a kind of bird's-eye perspective on the network. The idea behind it is a central 'intelligence' that sees the big picture and is thus able to control and optimize data streams better and more efficiently.
  • Thanks to the separation of the control plane and the data plane, individual devices can be better optimized for their tasks: a few high-performance servers for control, and slim, 'dumb' switches on the data plane. This is supposed to enable significant savings on energy and hardware costs.

Where Is SDN Implemented Today?

According to a Gartner survey from December 2014, SDN was implemented on a significant level in 7% of the surveyed companies. Taking into consideration Gartner's focus on large corporations, as well as the fact that SDN is currently only plausibly affordable and advantageous for large companies, implementation in small and medium-sized companies is probably much lower. According to Gartner, 10,000 companies will have implemented SDN by the end of 2016. This number seems high at first glance, but is quickly put into perspective when one realizes that in Germany alone there are more than 3.5 million companies, 330,000 of which have more than 10 employees.

What Is Delaying the Spread of SDN?

Three main factors stand in the way of SDN expanding more quickly:  
  1. Lack of market consolidation
    Currently, many manufacturers with sometimes contradictory concepts are trying to establish their position in the SDN market. Cisco, a big player in network hardware, is pushing its own standard, "Open Network Environment" (ONE), with a tight coupling of the control plane and the underlying (Cisco) hardware. By contrast, virtualization specialist VMware argues that SDN is already part of a comprehensive (VMware) virtualization package. Most companies are wisely waiting to see which concept and which manufacturers will prevail and offer a long-term perspective before taking the jump and making such a serious change to their IT.
  2. 'Old' hardware
    Hardware on the data plane can be 'dumber' (and cheaper) than current network hardware, but it has to support OpenFlow (or the manufacturer's equivalent)—which usually isn't the case for current hardware. Most companies would have to replace their entire network hardware before implementing SDN, which is an enormous cost factor.
  3. Implementation cost and effort
    Implementing SDN means completely restructuring the existing IT infrastructure. Personnel have to be trained, hardware must be updated (see #2), and a project of this magnitude will cause massive interference with regular business processes. What large corporations can afford with their own teams and external consultants is hardly feasible for smaller companies.

Is it Possible to Monitor SDN?

As a manufacturer of a monitoring solution that offers unified monitoring for small and midsized companies, Paessler is often asked about supporting SDN-controlled networks. Even software-controlled networks need functioning hardware. The control plane may be able to compensate for the failure of individual devices so that no direct damage occurs, but overall network performance is still affected by failures and disturbances. Proactive and complete monitoring of the data plane is therefore absolutely essential, even for SDN-controlled networks. On the other hand, applications have to be connected to the control plane via interfaces in order to send and receive data. These interfaces and applications have to be operational and available at all times and should be included in comprehensive network monitoring.
Current monitoring solutions, however, generally aren't yet able to monitor the control plane. Consistent standards and interfaces need to be established before manufacturers of monitoring tools can take action. Once SDN providers publish standards through which their solutions expose the corresponding control plane performance data, conventional monitoring software providers can jump in, pick up this data, and integrate it into a central, comprehensive overview of the entire IT.
Established manufacturers of virtualization software like VMware, Citrix and Microsoft are good examples of this. Most monitoring solutions are now proficient in handling their standards comprehensively and integrating the virtual environment into overall monitoring. However, several years of hype and setbacks passed before virtualization was able to establish itself on the market, and by the time it really started to spread, the well-known monitoring solutions were ready. SDN will be similar: it will take years before administrators of midsized companies think seriously about implementing SDN and about how to maintain control of the control plane. If they have established, comprehensive monitoring solutions in their networks, they can assume that they will be prepared for SDN by then, too.
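To make the monitoring side concrete, here is a minimal sketch of what polling an SDN control plane could look like once such interfaces exist. It assumes an OpenDaylight-style RESTCONF endpoint; the host, credentials, and JSON layout are illustrative assumptions, not a PRTG feature:

```python
import requests

# Hypothetical OpenDaylight-style controller (illustrative values only)
CONTROLLER = "http://sdn-controller.example.com:8181"
NODES_URL = CONTROLLER + "/restconf/operational/opendaylight-inventory:nodes"

def poll_control_plane():
    # Ask the controller's northbound REST API which switches it manages.
    resp = requests.get(NODES_URL, auth=("admin", "admin"), timeout=10)
    resp.raise_for_status()
    nodes = resp.json().get("nodes", {}).get("node", [])
    for node in nodes:
        # Each node is a data plane device as the control plane sees it;
        # a monitoring tool would turn such counts into sensor channels.
        print(node.get("id"), "ports:", len(node.get("node-connector", [])))

if __name__ == "__main__":
    poll_control_plane()
```

A monitoring tool that ingests this kind of controller data alongside classic SNMP data from the data plane would deliver exactly the combined view described above.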

Monitor Ping from Multiple Locations with PRTG




A few weeks ago we presented the Cloud HTTP sensor, which enables you to keep an eye on the loading time of a web server via HTTP (Hypertext Transfer Protocol) from various locations worldwide. Today we want to focus on another brand-new sensor that also uses the PRTG Cloud to enable monitoring from different international locations: the Cloud Ping sensor. Just add the web server you want to monitor to PRTG as a device, then add the Cloud Ping sensor to this device, and you're ready to start monitoring!

We have set up the PRTG Cloud, which, by the way, is also used to deliver push notifications to smartphones running one of our PRTG apps, to provide you with data from remote locations distributed over five continents around the globe:
  • Asia Pacific: Singapore
  • Asia Pacific: Sydney
  • Asia Pacific: Tokyo
  • EU Central: Frankfurt
  • EU West: Ireland
  • South America: Sao Paulo
  • US East: Northern Virginia
  • US West: Northern California
  • US West: Oregon
Of course the sensor also allows you to display the global average response time.
If you want to know more about the Cloud Ping sensor, just have a look at the PRTG manual. This sensor type is currently in beta status, so we'd really appreciate your feedback on your experience with it! Also please note that currently you can only use 5 sensors of this type at the same time—for more information please have a look at the Knowledge Base article on "Allowed Number of Cloud Sensors per PRTG Installation".
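If you prefer to read the results programmatically rather than in the web interface, the PRTG HTTP API can return sensor data as JSON. A minimal sketch follows; the server name, credentials, and the filter_type value "cloudping" are illustrative assumptions, so check the API documentation of your installation:

```python
import requests

# Illustrative PRTG server and API credentials (replace with your own)
PRTG = "https://prtg.example.com"
AUTH = {"username": "prtgadmin", "passhash": "0000000000"}

def cloud_ping_overview():
    # Query the PRTG table API for Cloud Ping sensors and their last values.
    params = {
        "content": "sensors",
        "columns": "objid,device,sensor,status,lastvalue",
        "filter_type": "cloudping",  # assumed internal type name
    }
    params.update(AUTH)
    resp = requests.get(PRTG + "/api/table.json", params=params, timeout=10)
    resp.raise_for_status()
    for s in resp.json().get("sensors", []):
        print(s["device"], s["sensor"], s["status"], s["lastvalue"])

if __name__ == "__main__":
    cloud_ping_overview()
```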

Cluster Support for Remote Probes: Failover Nodes Show Remote Probe Data




We have good news for all of you who run a cluster installation of PRTG Network Monitor: You can now connect all of your remote probes to all of the failover nodes in your cluster! This lets you take full advantage of PRTG's failover functionality: You can see the monitoring data from all of your locations and receive alarms when your remote probes detect sensor errors, even if your primary master node fails. No matter whether you're using remote probes to monitor distributed locations or to spread the monitoring load over several machines, your remote probes can now be part of your cluster.

The main objective of PRTG's clustering feature is to provide high availability for your monitoring environment. Running a cluster installation maintains the uptime of your monitoring without degradation due to failed connections, failed hardware, or software upgrades. If the primary node in your PRTG cluster fails, one of the failover nodes takes over the role of the master and controls the cluster until the original master is back online. While your primary server is unavailable, you can still review monitoring data on one of the failovers, control the configuration, receive all alarms, and get gapless uptime monitoring.
Now, with the latest release of PRTG Network Monitor, there is an important addition to the clustering feature: Your remote probes can connect to all of the cluster nodes now! If you run PRTG in a cluster, all remote probes can send their monitoring data to all of the cluster nodes.  With this additional functionality, you can see data from all of your remote probes even when the master server fails. You already know this functionality from the cluster probe, and now you have it for remote probes too. As of this release, you will never miss alarms for any devices on your remote probes, whether they be hardware failures or breached thresholds.
In previous versions of PRTG you did not lose monitoring data from remote probes when the master failed. If the connection between the remote probes and master server was unavailable, the remote probes would continue to run and buffer their sensor results in memory, and then you would receive the sensor results as soon as your probes could reconnect to the primary master. However, until now, you were not able to view the live data of those remote probes or to receive alarms while the master node was disconnected.
The new cluster support for remote probes resolves this issue. You can now keep all monitoring data and potential warnings and alarms from remote probes in view at all times.

How Cluster Support for Remote Probes Works

First, you need to allow remote probe connections to your failover nodes. So, log in to each failover server, open the Core Server tab in the PRTG Administration Tool, and select one of the options to accept connections from remote probes. Then, as soon as you acknowledge a new remote probe connection to your PRTG core server in the PRTG web interface, this probe will appear on all of your master and failover nodes. It connects automatically to the correct IPs and ports of the cluster nodes. You can define the cluster connectivity in the Administrative Probe Settings of a remote probe. By default, new probes send their data to all failover nodes. For existing probes, you will need to enable cluster connectivity first:
  • Open the Settings tab of an existing Remote Probe.
  • Navigate to the section Administrative Probe Settings.
  • For Cluster Connectivity, choose the option Probe sends data to all cluster nodes (see the manual for more details about this setting).

That's it! When you have ensured that communication between all your remote probes and cluster nodes is possible, you will immediately see the monitoring data from the remote probes on your cluster nodes, and all PRTG servers will show the same sensor values on the remote probes. The PRTG server that is responsible for management and configuration of remote probes is always the currently active Master node. For more details about remote probes in a cluster, please see the manual section Failover Cluster Configuration: Remote Probes in Cluster.
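If you manage many probes, a quick scripted cross-check can confirm that every remote probe is visible on every cluster node. The sketch below queries the table API of each node and compares the probe lists; the node URLs, credentials, and response field names are illustrative assumptions:

```python
import requests

AUTH = {"username": "prtgadmin", "passhash": "0000000000"}
# Both cluster nodes of an illustrative PRTG installation
NODES = ["https://prtg-master.example.com", "https://prtg-failover.example.com"]

def probes_on(node_url):
    # List the probes this cluster node currently knows about
    params = {"content": "probes", "columns": "objid,probe"}
    params.update(AUTH)
    resp = requests.get(node_url + "/api/table.json", params=params, timeout=10)
    resp.raise_for_status()
    return {p["probe"] for p in resp.json().get("probes", [])}

if __name__ == "__main__":
    seen = [probes_on(n) for n in NODES]
    everywhere = set.intersection(*seen)
    # Probes missing on some node probably still have cluster
    # connectivity disabled in their Administrative Probe Settings.
    print("Visible on all nodes:", sorted(everywhere))
    print("Missing somewhere:", sorted(set.union(*seen) - everywhere))
```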

What Happens if the Primary Master Fails?

Let's say you run a PRTG cluster installation with two servers (that is, two cluster nodes) and one remote probe. You have set Cluster Connectivity as described above, so your remote probe is connected to both PRTG servers and transmits monitoring data to both of them. With this setup, you can see the data from the remote probe on both PRTG servers, the primary master and the failover node. If the current master node fails, the failover node becomes the new master and, because of this, responsible for the remote probe. This means that this cluster node executes all tasks of the PRTG core server, including notifications. On this server, you can also still see all monitoring data from the remote probe while the primary master is disconnected. Once the primary master is up and running again, the remote probe reconnects to it and transmits the data it buffered during the server outage. Thanks to this mechanism, you get gapless data from remote probes on all your cluster nodes, even if one of your cluster nodes fails.

What Should I Consider when Using Remote Probes in a Cluster?

Each remote probe sends monitoring data to each PRTG server in your cluster, so you will encounter increased traffic within your network. Usually, this will not cause problems and, as such, it is not a big disadvantage. However, if you do have any bandwidth limitations, you can simply switch off cluster connectivity for specific remote probes. If you decide to add remote probes to your cluster to increase the performance of your PRTG installation with the help of distributed monitoring, please also keep the higher bandwidth usage in mind to still enjoy the advantages of remote probes in a cluster!
Please note: This newly introduced feature allows you to connect probes to all cluster nodes. If you require high-availability for the remote probe itself, then please install a second remote probe on a machine next to your existing one, connect it to your PRTG installation, and then manually re-create all devices and sensors of the original probe on this second probe (for example, by cloning devices from the original remote probe). With this copy of the remote probe you can continue to monitor the desired devices even if the original probe fails.  And, of course, you can also connect this redundant remote probe to your cluster.
Update your PRTG installation now to get cluster support for your remote probes! If you don't have a failsafe network monitoring solution up and running yet, download PRTG and try it out for free.

Friday, July 17, 2015

Choose DCIM or Intelligent Patch Panels?



Complete control of your IT infrastructure leads to faster incident response, cost savings, increased staff availability, and automation of routine tasks. But “control” involves much more than managing data connectivity and patch panels. All equipment should be documented and managed throughout its lifecycle to provide a total view of the data center’s physical infrastructure.
Patch Panel
Active patching solutions (“intelligent” patching) document patches on the active patch panel and focus on data connectivity management; away from active panels, however, these solutions provide management through paper work orders. This requires multiple manual updates, and is subject to error.
DCIM (Data Center Infrastructure Management) is an alternative to intelligent patch panels. Cormant-CS captures full information about the entire channel of connectivity, including infrastructure and assets. We’ve seen productivity and asset utilization increase by anywhere from 20% to 50% after deployment. Here are a few of its features, which separate it from intelligent patch panels.

Cormant-CS is an Excellent DCIM Choice


Cormant-CS DCIM
  • Mobility and Barcodes: Cormant-CS is fully mobile, so records are updated in real time, ensuring data accuracy. Tablets and handheld computers support barcode and RFID scanning of cables.
  • Improved Cost-to-Benefit Ratios: By maximizing the use of your entire infrastructure, you can potentially achieve millions in savings. The cost of Cormant-CS is typically 0.01% of the total cost of a data center or campus. Active patch monitoring, by contrast, focuses only on structured cabling, which limits the savings it offers, and it costs between 20% and 35% of total structured cabling system costs.
  • MACs (Moves, Adds and Changes) in Physical IT Infrastructure: Cormant-CS' combination of network, client, web and mobile components ensures high efficiency during connectivity and infrastructure changes, especially at the work area. All changes are made once and are immediately documented on the handheld, providing the data confidence to plan further changes quickly and efficiently.
  • Cabling Independence: Completely configurable and independent of cabling vendors, Cormant-CS can document any cabling brand or connectivity type (including non-structured cabling). You're never locked in to one vendor for panels and special patch cords, and you're free to mix and match.
  • Integration: Unlike active patching systems, Cormant-CS exchanges data with other systems and hardware. The Cormant-CS web services XML API and SNMP/WMI/XML/CLI scripting engine come as part of the core product. With the built-in C# and VB.NET scripting engine, scripts can be written to extend functionality and to interact with network devices and services; a sketch of this kind of SNMP interaction follows this list.
  • No Power or Additional Hardware: As a software/mobile solution, this IT infrastructure management tool doesn't negatively impact the environmental footprint of your data center, and no additional planning is necessary to deploy it.
  • Implementation: Cormant-CS can be implemented in new, existing or retrofit projects without downtime, conserving time and money. The software can even be installed on a virtual machine to speed up rollout.
  • Management of Real Problems: Cormant-CS users implement this solution to manage assets and connectivity, but also to manage power, heat, RU space, inventory, etc., and all records are updated continuously.
  • No Spreadsheet Misery: All data is stored in a single database shared between groups, giving each group direct access to the information it needs, preventing wasted time and reducing workload.
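As an illustration of the kind of integration described in the Integration bullet above, here is a minimal sketch of an SNMP lookup such as a scripting engine might run to verify that a documented asset really is the device at a given address. It is written in Python with the pysnmp library purely for illustration; Cormant-CS itself provides a built-in C#/VB.NET engine, and the device address and community string are invented:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def snmp_sysname(host, community="public"):
    # Fetch sysName.0 from a network device, the kind of cross-check an
    # infrastructure-management tool can run against its records.
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community),
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return str(var_binds[0][1])

if __name__ == "__main__":
    print(snmp_sysname("switch1.example.com"))  # illustrative address
```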

Learn More About Enterprise DCIM Connectivity


Portable, practical and usable, the Cormant-CS enterprise DCIM connectivity and infrastructure management system provides a holistic approach to managing data center assets, power, connectivity and space.

Wednesday, July 15, 2015

6 Steps for Process Automation



6 Steps to Bring Your IT Process Automation from Basic to Breakthrough

With the popularity of IT Process Automation (ITPA) steadily on the rise, more and more organizations are embracing the power of technology for streamlining operations, improving efficiency and creating a more productive environment. Most of these companies, however, are only just beginning to scratch the surface of the many benefits ITPA can offer. If your firm is among those merely in the beginning stages of leveraging ITPA, here are 6 steps you can implement to bring things to the next level.

Step 1: Conduct a Skills and Organization Assessment

The maturity level of an organization along with the ability of operations teams to communicate, collaborate and work to support one another can be directly tied to advanced IT Process Automation success. As your business matures, cross-organizational processes must be identified and adapted to create a more holistic approach to service delivery. Each individual team member should be assessed to identify those with the following skills, which are essential for any automation management professional:
  • In-depth knowledge and expertise of operations
  • In-depth knowledge and expertise of the company’s infrastructure
  • In-depth knowledge and expertise of the business as a whole
  • Experience
Training and reallocation of resources may also be necessary, based on the results of this assessment.

Step 2: Improve IQ Levels of Various Workflows

The next step involves developing and improving your team's understanding of the workflow process. Specifically, improvements should be made in how personnel understand the complete concept of everything from designing, building and testing a workflow to deploying and administering it once it's developed. As a process, improving workflow IQ involves the following tasks:
  • Determine/plan what the workflows are
  • Figure out what to automate and when
  • Document all aspects of process workflows, including upgrades and revisions
  • Implement comprehensive change management strategies

Step 3: Standardize Both the IT Infrastructure and the Approach to IT Process Automation

A high level of IT maturity includes the management of a standardized infrastructure, which allows IT process automation to be implemented without the costs and complexities of non-standardized, diverse IT infrastructures. Similarly, the processes through which ITPA workflows are tracked and maintained should also be standardized. As a result, the organization can realize the following benefits:
  • Lower costs
  • Increased reliability and performance
  • Reduced workflow complexity
  • Streamlined/automated repetitive tasks
  • Lower risk of error
  • Reduced need for checks and balances on automated workflows
  • Need for fewer skills
  • Improved tracking and reporting
  • Reduced reliance on high level expertise

Step 4: Establish Objectives and Manage Expectations

Establishing clear-cut objectives based on the reasons why automation is being considered will help determine which specific IT processes and workflows will actually be automated and in what order. This creates an accurate scope of the overall strategy for implementing and maximizing IT Process Automation. Objectives should include specific metrics which can be tracked and measured.
Expectations surrounding workload and cost reduction should also be set early and accurately. Successfully automating smaller tasks can be an excellent way to demonstrate value and provide a foundation upon which to build out more complex automation projects.

Step 5: Make a Concerted Effort to Control Costs

The costs associated with successful ITPA extend well beyond the initial investment in the automation tool. According to Gartner, managing an IT Process Automation tool can cost anywhere from 2 to 4 times as much as the product itself. Understanding and factoring in the expenses of all the various aspects is essential. To keep costs manageable, consider implementing the following strategies:
  • Try using existing automation technologies to start
  • Automate basic, repetitive tasks first
  • Standardize as many processes as possible
  • Carefully manage the initial expense of implementation, set up and customization

Step 6: Define the Value as Well as the Benefits

The value of IT process automation must be accurately measured against the time and other resources the process required before automation. For many, the improved reliability, reduction in costly errors, quicker response times and enhanced efficiency are evident almost as soon as automation is implemented. The real ROI, however, is realized as ongoing costs continue to go down or are eliminated altogether. To effectively assess the value and benefits of automation, consider the following metrics (a small worked example follows the list):
  • Pre-automation time necessary to accomplish the task manually
  • Pre-automation labor required to complete the process manually
  • Skills needed to accomplish the process prior to automation
  • Problems associated with manual processes (i.e. human error)
  • Downtime due to manual process issues (i.e. servers, applications, IT services, etc.)
  • Cost/difficulty of audit compliance prior to automation
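As a small worked example of how these metrics combine into an ROI figure, consider the sketch below. Every number in it is invented for illustration; only the 2x to 4x management-cost multiplier comes from the Gartner figure quoted in Step 5:

```python
# Illustrative first-year ROI estimate for automating one recurring task.
runs_per_month = 400          # how often the task occurs
manual_minutes_per_run = 15   # pre-automation effort per run
hourly_labor_cost = 40.0      # fully loaded staff cost
error_rate_manual = 0.02      # share of manual runs needing rework
rework_cost = 150.0           # average cost of one manual error

tool_cost_year = 10_000.0
# Gartner: managing the tool can cost 2x to 4x the product; assume 3x.
management_cost_year = 3 * tool_cost_year

manual_cost_year = 12 * (
    runs_per_month * manual_minutes_per_run / 60 * hourly_labor_cost
    + runs_per_month * error_rate_manual * rework_cost
)
automation_cost_year = tool_cost_year + management_cost_year

print(f"Manual cost/year:     ${manual_cost_year:,.0f}")
print(f"Automation cost/year: ${automation_cost_year:,.0f}")
print(f"First-year net:       ${manual_cost_year - automation_cost_year:,.0f}")
```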
Beyond the basic uses and benefits of IT Process Automation, this advanced technology has the potential to significantly improve business operations. By implementing the above steps, your IT department can tap into ITPA’s fullest potential and leverage its benefits to drive the ongoing success of your organization. It starts with the right partner.

Tuesday, July 14, 2015

Unified IT Using ManageEngine OpManager

Comprehensive IT Operations Management Software for Data Centers and Large Enterprises

Most companies today rely on their IT to deliver business services to end users. Any delay or disruption in service delivery hurts the business badly. IT teams strive to keep MTTR (mean time to repair) as low as possible so that they can meet their SLAs. That is easier said than done, however, because IT involves several layers: network, server and storage, and application. Without knowing where the fault lies, it is impossible to fix it quickly.
Spotting the problem area quickly requires a fair amount of insight into each of the IT layers, and only in-depth monitoring of these layers can provide such insight. At the same time, all this information must be correlated across layers so that the root cause of a problem can be found without wasting time.

Single console for entire IT operations management

OpManager with its plug-ins offers visibility into network, server and storage, and application layers with correlation between the fault and performance data of these layers from a single web console.

Network Monitoring

OpManager monitors network devices such as routers, switches, firewalls, load balancers, wireless access points, etc. via SNMP and CLI. It tracks performance metrics such as CPU, memory, interface traffic, errors and discards, packet loss, response time, and more.
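For teams that want this inventory outside the web console, OpManager also exposes a REST API. The sketch below lists monitored devices and their status; the host, port, API key, endpoint, and response field names are assumptions based on typical OpManager REST conventions, so verify them against the API documentation of your version:

```python
import requests

# Illustrative OpManager server and API key (replace with your own)
OPMANAGER = "http://opmanager.example.com:8060"
API_KEY = "0000000000000000"

def list_devices():
    # Ask OpManager's REST API for the devices it currently monitors.
    resp = requests.get(
        OPMANAGER + "/api/json/device/listDevices",
        params={"apiKey": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    for dev in resp.json():
        # Field names are assumed; adjust to the actual payload.
        print(dev.get("deviceName"), "-", dev.get("statusStr"))

if __name__ == "__main__":
    list_devices()
```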

Integrated ITIL-ready ServiceDesk

OpManager tightly integrates with ManageEngine ServiceDesk Plus (SDP) to give you an efficient IT fault management system that automates the creation and assignment of trouble tickets. You can check the knowledge base in SDP for known solutions, investigate network and device performance further through OpManager, check asset 'audit trails' in SDP, and do a lot more with this powerful combination.

Application Performance Management

Monitor 50+ mission-critical applications such as Oracle, SAP, WebSphere and Microsoft .NET, and databases such as Cassandra and Sybase, out of the box with the Application Monitoring plug-in (powered by ManageEngine Applications Manager). You can monitor metrics such as response time, resource availability, and much more.

Network Traffic Analysis

The Network Traffic Analysis plug-in (powered by ManageEngine NetFlow Analyzer) controls LAN and WAN traffic by analyzing the flows exported by routers and switches. It supports a wide variety of flow formats such as NetFlow, sFlow, J-Flow, IPFIX, etc. It identifies the devices, applications, and users that consume the most bandwidth and helps you streamline bandwidth usage by setting appropriate QoS policies.

Network Configuration Management

Network configuration changes are critical and need to be monitored 24x7. The NCM plug-in (powered by ManageEngine DeviceExpert) not only monitors network configurations for changes but also takes periodic backups of configurations. It also lets you push configuration changes from its web GUI, via an approval board.

Server Performance Monitoring

OpManager monitors physical servers such as Windows, Linux, Unix, and Solaris servers, and virtual servers such as VMware, Hyper-V, and Xen servers. It monitors various performance metrics, including CPU, memory, and disk, as well as processes, services, events, and much more.

Storage Devices Monitoring

The Storage plug-in (powered by ManageEngine OpStor) monitors more than 100 types of storage devices, from storage arrays and fabric switches to tape libraries, tape drives, and Host Bus Adapter (HBA) cards. It helps you monitor performance metrics such as IOPS, reads and writes, and cache configuration.

End-user Experience Monitoring

The Application Monitoring plug-in (powered by ManageEngine Applications Manager) also helps you monitor the end-user experience of business-critical services such as DNS, LDAP, ping, and mail servers from enterprise branch offices or actual customer locations. This helps you determine whether the fault lies in the data center or at the other end.

Intelligent Fault Monitoring

OpManager offers multi-level thresholds for proactive monitoring, which help you catch faults at various stages and act accordingly. OpManager notifies you of faults immediately via email and SMS, and includes Workflow Automation to remediate such faults automatically.
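The idea behind multi-level thresholds is simple enough to sketch: the same metric passes through escalating severity bands before it becomes an outage, and each band can trigger its own notification or remediation workflow. The band names and values below are invented for illustration and are not OpManager code:

```python
# Generic multi-level threshold check, as used in proactive monitoring.
THRESHOLDS = [
    ("attention", 70.0),  # early warning band
    ("trouble", 85.0),    # escalated band
    ("critical", 95.0),   # act-now band
]

def classify(value):
    """Return the highest severity band the value has crossed, or 'clear'."""
    severity = "clear"
    for name, limit in THRESHOLDS:
        if value >= limit:
            severity = name
    return severity

for cpu in (55.0, 72.5, 90.0, 97.1):
    print(f"CPU {cpu:5.1f}% -> {classify(cpu)}")
```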

Intuitive Dashboard and Widgets

OpManager offers dashboards and widgets that help you view the performance of your IT at a glance. You can create widgets and customize dashboards to your needs to get the required information displayed. You can show these dashboards on NOC screens and monitor your IT operations 24x7.