Data Center is our focus

We help you build, access, and manage your data center and server rooms

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP, and electrical.

Get ready for the #Cloud

Start your Hyper Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software, and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Friday, January 03, 2014

It's Not About Working Hours

Re-thinking Productivity: It's Not About Working Hours!

February 11, 2013 | By Darus Salam
Illustration source: http://wallpaperstock.net/sands-of-time-wallpapers_w28000.html

Productivity is a company's primary goal. High productivity reflects an organization's profitability and its capacity to keep growing. One indicator of organizational productivity is the productivity of its employees. But can productivity be measured by the length of someone's working hours?

If you think so, then you should reconsider what productivity means. For years, the image of a good employee in America was someone who always left the office late, kept working on weekends, and dedicated their life to the job. But is that really true? Productivity is not about how much you work, but about how much you accomplish.

This shift in the productivity paradigm has spawned many workplace innovations. Many companies have now moved from traditional work patterns to more dynamic ones. Where employees once had to clock in and out at fixed hours, many companies now allow their employees to work remotely.

To boost the productivity of a company and its people, priorities must be set for every task. As reported by Inc., there are at least three groups you can use to categorize your work:

Group 1: Tasks that must be done properly, and right now. These are the tasks that are crucial to your company's core business. Do not let them slip.

Group 2: Tasks that do not have to be perfect. They still need to be done, however, because it will show if they are not. Examples include using company letterhead for correspondence, activating the answering machine when you are out of the office, or packaging every mail shipment properly.

Group 3: Tasks at the bottom of your list. These are tasks you can postpone, though they are sometimes easy to forget entirely. Attention should go to the most important work first, then move down to the next level. Do not get trapped by less important, time-consuming tasks that erode your productivity.

Does your DCIM already have the features below?



Everyone claims to offer DCIM (data center infrastructure management), but have they considered the points below?

Suvish Viswanathan is the senior analyst, unified IT at ManageEngine, a division of Zoho Corp. You can reach him on LinkedIn or follow his tweets at @suvishv. This is the last part of a three-part series.
In the second post in this series, we looked at the evolution of data center asset management and the degree to which it has evolved in parallel with traditional IT management. Ultimately, if you adopt a service-oriented management focus, the goal of both management efforts is the same — to enable the optimal delivery of a service to the end user. That said, the IT and facilities management worlds should not need to operate in parallel. Instead, they need to operate as one, in a truly integrated manner.
That’s what DCIM — data center infrastructure management — should be all about.
Now, before you go saying “Ah, DCIM — it is rubbish” (as, in fact, a journalist said to me just the other day), let me distinguish what I’m talking about from the DCIM that everyone else is talking about (which, I agree, is rubbish).
Unfortunately, DCIM has become one of those buzzwords in the marketplace that has no standard definition.  I recently saw one article which mentioned that more than 80 vendors claim to offer DCIM solutions. The problem with that is that most of them don’t. They may offer an IT or facilities management product that facilitates the management of one part of the data center infrastructure, but that’s a far cry from the kind of integrated DCIM solution that today’s fast-paced business needs.

The Shape of a Truly Integrated DCIM Solution

Data center management will never be performed efficiently if the IT infrastructure and facilities infrastructure are managed separately. Can you imagine your blade servers running in a room cooled to only 90°F? Should you really feel comfortable about the ongoing availability of your business-critical applications if you don’t know that the diesel tank fueling your back-up generator is only 10 percent full? Is it really possible to ensure the security of your infrastructure and the critical data it processes without a proper sensing mechanism in place?
When viewed through the lens of service delivery, all the assets in your data center are connected, and your ability to monitor and manage them needs to be equally as interconnected. A true DCIM must be able to do the following:
Collect Data. The data center is full of data collection nodes: IT systems collecting performance data in real time from servers, switches, data storage systems and more — as well as facilities infrastructure systems collecting data about rack temperatures, power consumption, backup generator fuel tank levels and more. These systems rely less and less frequently on an agent-based approach to reporting, so a DCIM solution must be able to collect data using a wide range of common communications protocols — from SNMP, WMI, SSH and the like for IT assets to Modbus, BACnet, LonMark and others for the facilities infrastructure assets.
The data capture features of DCIM need to support more than real-time infrastructure monitoring, too. The DCIM system must be able to reach deep into the broader infrastructure to pull granular data from individual pieces of equipment for planning and forecasting purposes.
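The collection layer described above can be pictured as one poller fanning out across protocol-specific adapters. A minimal sketch follows; the adapter classes and the sample readings are hypothetical assumptions, not a real DCIM API, and a real deployment would back each adapter with SNMP, WMI, SSH, Modbus, BACnet, and so on.

```python
# Minimal sketch of a multi-protocol DCIM collector.
# Adapter classes and readings are illustrative, not a real product API.

class Adapter:
    """Base class: one adapter per protocol / asset family."""
    def poll(self):
        raise NotImplementedError

class SnmpAdapter(Adapter):
    """Would normally walk SNMP OIDs on IT assets (servers, switches)."""
    def poll(self):
        return {"asset": "switch-01", "cpu_load_pct": 37, "protocol": "SNMP"}

class ModbusAdapter(Adapter):
    """Would normally read Modbus registers on facility assets."""
    def poll(self):
        return {"asset": "genset-01", "fuel_pct": 10, "protocol": "Modbus"}

def collect(adapters):
    """One collection cycle: gather a reading from every adapter."""
    return [adapter.poll() for adapter in adapters]

readings = collect([SnmpAdapter(), ModbusAdapter()])
for r in readings:
    print(r["asset"], r["protocol"])
```

The point of the single `collect()` entry point is that IT and facilities data land in one stream, which is what makes the integrated analysis below possible.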
Provide analytical support. Ultimately, the point of collecting data is to subject it to analysis and correlation, so a DCIM system needs a powerful analytical component. From a data center management standpoint, the analytical engine can facilitate decisions. These can be programmatic decisions, as when an alert might prompt the automated transfer of virtual machines from one server to another or automatically increase the airflow within a certain set of racks because of a sudden spike in CPU temperatures. Or, they can be strategic decisions taken by a committee, as when planners view DCIM data for environmental trends, application performance patterns or the broader user experience.
Accommodate the operator. A DCIM solution that can monitor and manage a wide range of assets — but only if those assets have been built by the same vendor that built the DCIM solution — is a non-starter. The days of a monolithic, single-vendor infrastructure are long past. In fact, just the opposite is true: The whole notion of the “data center” itself is becoming more and more fluid. If the data center is where an organization runs its mission critical applications and manages the delivery of the user experience, then parts of that data center may be in the cloud. Parts of that data center may reside in physically non-contiguous locations. And decisions about future data center elements may be governed as much by time-to-service delivery as physical location.
An integrated DCIM solution must accommodate a wide range of systems, tools, protocols and standards. It needs to be able to pick up alerts from different assets in the data center and send them to the appropriate authority (via email, SMS or whatever mechanism is preferred by the enterprise). All the elements in the infrastructure need to expose their APIs so that the management tools can understand and interact with them. This would give data center managers the flexibility they need to expand in the ways that will be best for their business (which a vendor lock-in never does).
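The vendor-neutral alert routing described above amounts to normalizing alerts from heterogeneous assets and dispatching each over whatever channel the enterprise has configured for its severity. The channel names and return format in this sketch are hypothetical.

```python
# Sketch of vendor-neutral alert routing: pick the delivery channel
# configured for the alert's severity. Channel names are assumptions.

def route_alert(alert, preferences):
    """Return a dispatch record for the preferred channel (default email)."""
    channel = preferences.get(alert["severity"], "email")
    return f"{channel}:{alert['asset']}:{alert['message']}"

prefs = {"critical": "sms", "warning": "email"}
print(route_alert({"severity": "critical", "asset": "genset-01",
                   "message": "fuel below 10%"}, prefs))
```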
Control and Automate. Today’s data centers are enormously complex. Some management issues need human oversight; others do not. A truly integrated DCIM solution can help you manage your resources so that issues that do require human intervention are flagged and escalated accordingly. The solution needs to be able to contact the person with the right skills, the right authority and the right access. It needs to be able to alert that person in a manner that is in keeping both with the severity of the issue and the policies and procedures of the organization itself.
For those issues that do not require human intervention, the DCIM must be able to handle them programmatically through various workflow automations. This enables you to focus your (highly intelligent, creative and skilled) human resource on the strategic management tasks that can enhance business productivity, the end-user experience or some other area that matters more to the enterprise.
Manage inventory centrally. Asset management is a major pain point in the data center, but a truly integrated DCIM solution can eliminate this pain through an automated asset discovery engine.
Such an engine would provide capabilities to crawl the data center infrastructure and discover all the devices and services involved — then feed those discoveries into a centralized repository such as a configuration management database (CMDB). Such a database would not be a mere manifest of detected devices, systems and services, though; for this database to be truly useful, it must enable data center managers to understand the relationships between the devices, systems and services. Thus, if a data center manager were planning a project to swap out a row of batteries, for example, the CMDB could let the manager know precisely which servers this row of batteries is backing up as well as precisely which mission-critical applications and services are running on those servers.
The practical impact of any asset change could be readily seen if this kind of DCIM were in place. It’s a hyperconnected world in the data center, which is why we need a truly integrated DCIM tool to handle it.
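The battery-row example above is essentially a relationship query over the CMDB. A toy sketch follows; the asset names and the "backs"/"runs" relationship labels are hypothetical, not any particular CMDB schema.

```python
# Hypothetical sketch of a CMDB relationship query: given a battery
# row, find the servers it backs and the services on those servers.
# Asset names and relationship labels are illustrative assumptions.

cmdb = {
    "battery-row-3": {"backs": ["server-12", "server-13"]},
    "server-12": {"runs": ["billing-app"]},
    "server-13": {"runs": ["crm-app", "mail"]},
}

def impact_of(asset):
    """Services affected if the given facility asset is taken offline."""
    services = []
    for server in cmdb.get(asset, {}).get("backs", []):
        services.extend(cmdb.get(server, {}).get("runs", []))
    return services

print(impact_of("battery-row-3"))
```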

The Utility of Metrics

Finally, you’ll note that I have not mentioned any of the metrics we usually discuss when talking about data center management. Historically, many people have described data center management in terms of total cost of ownership (TCO), power usage effectiveness (PUE), data center infrastructure efficiency (DCIE) and other metrics. These are important metrics insofar as they can help a data center manager monitor and understand the data center from an environmental perspective. The green IT initiative is important, and failure to monitor with an eye toward the data center’s carbon footprint will have a significantly negative impact on both the company’s tax bill and public image.
However, these metrics provide only a fragmented view of overall data center performance. Data center infrastructure management needs to transcend that fragmented view. The data center is the nerve center of business today, and it needs to be managed with the organization’s service delivery goals in mind. There are human, resource and environmental components that we need to balance and manage effectively. Only by taking an approach that unifies, integrates and consolidates all these elements can we manage the entire data center in a manner consistent with our broader service delivery goals.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Thursday, January 02, 2014

How to deploy the right Advanced Persistent Threat solution




How To Deploy the Most Effective Advanced Persistent Threat Solutions

Traditional defense tools are failing to protect enterprises from advanced targeted attacks and the broader problem of advanced malware. In 2013, enterprises will spend more than $13 billion on firewalls, intrusion prevention systems (IPSs), endpoint protection platforms and secure Web gateways. Yet, advanced targeted attacks (ATAs) and advanced malware continue to plague enterprises.
Lawrence Orans, research director at Gartner, provided additional commentary on how to analyze and compare different approaches and select complementary (as opposed to overlapping) solutions for detecting ATAs and malware.
Mr. Orans said:
The traditional defense-in-depth components are still necessary, but are no longer sufficient in protecting against advanced targeted attacks and advanced malware. Today's threats require an updated layered defense model that utilizes "lean forward" technologies at three levels: network, payload (executables, files and Web objects) and endpoint. Combining two or all three layers offers highly effective protection against today's threat environment.
To help security managers select and deploy the most-effective APT defense technologies, Gartner has developed the Five Styles of Advanced Threat Defense Framework. This framework is based on two dimensions: where to look for ATAs and malware (the rows), and a time frame for when the solution is most effective (the columns). The dashed lines between styles represent "bleed-through," since many vendor solutions possess characteristics of adjacent styles.
Figure 1: Five Styles of Advanced Threat Defense

Style 1 — Network Traffic Analysis
This style includes a broad range of techniques for Network Traffic Analysis. For example, anomalous DNS traffic patterns are a strong indication of botnet activity. NetFlow records (and other flow record types) provide the ability to establish baselines of normal traffic patterns and to highlight anomalous patterns that represent a compromised environment. Some tools combine protocol analysis and content analysis.
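The baseline-then-anomaly idea behind Style 1 can be sketched with a simple statistical check. Real tools work on NetFlow records; the hourly volumes and the three-sigma rule below are made-up assumptions for illustration.

```python
# Sketch of baseline-vs-anomaly detection on flow volumes (Style 1).
# The sample counts and the 3-sigma rule are illustrative assumptions.

from statistics import mean, stdev

def is_anomalous(history, current, k=3):
    """Flag the current sample if it deviates > k std-devs from baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > k * sigma

baseline = [100, 110, 95, 105, 98, 102]   # normal hourly DNS query volume
print(is_anomalous(baseline, 104))  # within the baseline
print(is_anomalous(baseline, 900))  # possible botnet beaconing
```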
Style 2 — Network Forensics
Network Forensics tools provide full-packet capture and storage of network traffic, and provide analytics and reporting tools for supporting incident response, investigative and advanced threat analysis needs. The ability of these tools to extract and retain metadata differentiates these security-focused solutions from the packet capture tools aimed at the network operations buyer.
Style 3 — Payload Analysis
Using a sandbox environment, the Payload Analysis technique is used to detect malware and targeted attacks on a near-real-time basis. Payload Analysis solutions provide detailed reports about malware behavior, but they do not enable a post-compromise ability to track endpoint behavior over a period of days, weeks or months. Enterprises that seek that capability will need to use the incident response features of the solutions in Style 5 (Endpoint Forensics). The sandbox environment can reside on-premises or in the cloud.
Style 4 — Endpoint Behavior Analysis
There is more than one approach to Endpoint Behavior Analysis to defend against targeted attacks. Several vendors focus on the concept of application containment to protect endpoints by isolating applications and files in virtual containers. Other innovations in this style include system configuration, memory and process monitoring to block attacks, and techniques to assist with real-time incident response. An entirely different strategy for ATA defense is to restrict application execution to only known good applications, also known as "whitelisting".
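The whitelisting strategy just mentioned is deny-by-default: execution is allowed only for binaries whose hash appears on a known-good list. A minimal sketch follows; the hashes here are computed from sample byte strings, not real applications.

```python
# Minimal sketch of application whitelisting (Style 4): allow execution
# only for known-good hashes. Sample binaries are illustrative bytes.

import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_good = {sha256(b"approved-app-v1")}

def may_execute(binary: bytes) -> bool:
    """Deny by default; allow only binaries on the known-good list."""
    return sha256(binary) in known_good

print(may_execute(b"approved-app-v1"))  # True
print(may_execute(b"dropper.exe"))      # False
```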
Style 5 — Endpoint Forensics
Endpoint Forensics serves as a tool for incident response teams. Endpoint agents collect data from the hosts they monitor. These solutions are helpful for pinpointing which computers have been compromised by malware, and highlighting specific behavior of the malware.
Because of the challenges in combating targeted attacks and malware, security-conscious organizations should plan on implementing at least two styles from this framework. The framework is useful for highlighting which combinations of styles are the most complementary. Effective protection comes from combining technologies from different rows (for example: network/payload, payload/endpoint or network/endpoint). The same logic applies to mixing styles from different columns (different time horizons). The most effective approach is to combine styles diagonally through the framework.
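Gartner's complementarity rule above can be made concrete: two styles complement each other when they differ in row (where to look) or column (time horizon), and a diagonal pair differs in both. The row/column assignments in this sketch follow the style descriptions above but are my own illustrative encoding, not Gartner's data.

```python
# Sketch of the complementarity rule: styles from different rows and/or
# columns combine well; diagonal pairs (score 2) are most effective.
# The row/column mapping is an illustrative reading of the framework.

styles = {
    1: ("network",  "real-time"),        # Network Traffic Analysis
    2: ("network",  "post-compromise"),  # Network Forensics
    3: ("payload",  "real-time"),        # Payload Analysis
    4: ("endpoint", "real-time"),        # Endpoint Behavior Analysis
    5: ("endpoint", "post-compromise"),  # Endpoint Forensics
}

def complementarity(a, b):
    """0 = overlapping, 1 = different row or column, 2 = diagonal pair."""
    row_a, col_a = styles[a]
    row_b, col_b = styles[b]
    return (row_a != row_b) + (col_a != col_b)

print(complementarity(1, 5))  # diagonal: network/real-time vs endpoint/post-compromise
print(complementarity(3, 4))  # same time horizon, different row
```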
More detailed information on the framework and how security managers can select and deploy the most effective APT defense technologies can be found in the report “Five Styles of Advanced Threat Defense”. The report can be found on Gartner’s website at http://www.gartner.com/resId=2576720.
Mr. Orans will provide additional insight into cybersecurity at Gartner Symposium/ITxpo 2013 taking place October 6-10 in Orlando, Florida.

Tuesday, December 31, 2013

Vigor 2925 series, a multi-purpose router for your office or branch



Vigor2925 Series is the IPv6-ready dual-WAN broadband security firewall router. It ensures business continuity for today and for the future IPv6 network. Its two Gigabit Ethernet WAN ports can accept various high-speed Ethernet-based WAN links via FTTx/xDSL/Cable. The two USB ports are for 3G/4G LTE mobile broadband access. With multi-WAN access, Vigor2925 routers support bandwidth management functions such as failover and load balancing, making them ideal solutions for reliable and flexible broadband connectivity for the small business office.
The specifications cover many functions that are required by modern day businesses, including secure but easy to apply firewall, comprehensive VPN capability, Gigabit LAN ports, USB ports for 3G/4G mobile dongles, FTP servers and network printers, VLAN for flexible workgroup management, and much more.

Load-balance and backup
You can combine the bandwidth of the dual WAN to speed up transmission through the network. The Gigabit WAN port is ideal for connection to a fast Internet feed such as fiber or VDSL2. The 10/100 Base-TX port can act as the backup or primary WAN, which is suitable for sharing the bandwidth of an xDSL or cable modem. If your primary ISP or DSL line suffers a temporary outage, WAN backup provides redundancy by letting the secondary Internet access temporarily route Internet traffic. All traffic is switched back to your normal communication port once service is restored.
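The failover behaviour just described (backup takes over during an outage, traffic switches back on recovery) reduces to a simple selection rule. This is a sketch of the logic only, with link health as plain booleans; the real router continuously probes the links.

```python
# Sketch of dual-WAN failover selection: prefer the primary WAN while
# healthy, fall back to the secondary, switch back on recovery.
# Health checks as booleans are a simplifying assumption.

def select_wan(primary_up: bool, secondary_up: bool) -> str:
    if primary_up:
        return "WAN1 (primary)"
    if secondary_up:
        return "WAN2 (backup)"
    return "no route"

print(select_wan(True, True))    # normal operation
print(select_wan(False, True))   # primary outage -> backup takes over
print(select_wan(True, True))    # primary restored -> traffic switches back
```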

The dual-WAN features of the Vigor2925 series ensure your operational efficiency and business network continuity.

VPN failover (backup)
The VPN failover (backup) ensures stable LAN-to-LAN (site-to-site) remote access.


VLAN for Secure and Efficient Workgroup Management

AP Management


Security & Productivity - SSID


Flexible WLAN


Supports Smart Monitor for up to 50 PC Users




Vigor2925 series with rack mount

Draytek Vigor supports SMS alerts for engineer notification


How to Send a Notification via SMS Alert Service

The Vigor router supports an SMS Alert Service, which keeps the administrator informed of the latest router status. In this note, we take the VPN alert service on the Vigor3900 as an example.

  1. Go to Object Setting >> SMS Service Object, and click Add to create a new service object.

  2. Set the SMS Service Object, and click Apply.
    a. Tick Enable.
    b. Enter the Profile name, and set the SMS provider configuration.

Note: The quota decreases with each SMS alert sent out.

  3. Go to Object Setting >> Notification Object, and click Add to create a new notification profile.

  4. Set the Notification Object, and click Apply.
    a. Enter the Profile name.
    b. Tick VPN Disconnection and Reconnection.

  5. Go to Applications >> SMS/Mail Alert Service >> SMS Alert Service, and click Edit to set the alert service.

  6. Set the SMS Alert Service, and click Apply.
    a. Tick Enable, and set the service configuration.

  7. The administrator will receive an SMS text when the VPN connectivity status changes.


Last modified on Friday, 01 November 2013 07:41

High Availability is a default feature in a PRTG Cluster

In PRTG, the High Availability feature is included by default, and you can configure PRTG as a Cluster.


PRTG Cluster Basics

One of the major new features in version 8 of PRTG Network Monitor is called “Clustering”. A PRTG Cluster consists of two or more installations of PRTG Network Monitor that work together to form a high availability monitoring system.
The objective is to reach true 100% uptime for the monitoring tool. Using clustering, uptime will no longer be degraded by failing connections due to an Internet outage at a PRTG server's location, by failing hardware, or by downtime due to a software upgrade for the operating system or PRTG itself.

How a PRTG Cluster Works

A PRTG cluster consists of one “Primary Master Node” and one or more “Failover Nodes”. Each node is simply a full installation of PRTG Network Monitor which could perform the whole monitoring and alerting on its own. Nodes are connected to each other using two TCP/IP connections. They communicate in both directions and a single node only needs to connect to one other node to integrate into the cluster.

Normal Cluster Operation

Central Configuration, Distributed Data Storage, and Central Notifications.
During normal operation, the "Primary Master" is used to configure devices and sensors (using the Web interface or Windows GUI). The master automatically distributes the configuration to all other nodes in real time. All nodes permanently monitor the network according to this common configuration, and each node stores its results in its own database. In this way, the storage of monitoring results is also distributed among the cluster (the downside of this concept is that monitoring traffic and load on the network are multiplied by the number of cluster nodes, but this should not be a problem for most usage scenarios). The user can review the monitoring results by logging into the Web interface of any of the cluster nodes in read-only mode. As the monitoring configuration is centrally managed, though, it can only be changed on the master node.
If downtimes or threshold breaches are discovered by one or more nodes only the primary master will send out notifications to the administrator (via email, SMS, etc.). So, the administrator will not be flooded with notifications from all cluster nodes in the event of failures. BTW, there is a new sensor state “partial down” which means that the sensor shows an error on some nodes, but not on all.
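The "partial down" state just mentioned is simply an aggregation over the per-node views of one sensor. A minimal sketch, assuming the per-node results are reduced to up/down booleans:

```python
# Sketch of the "partial down" sensor state: up when all nodes agree
# the sensor is up, down when all agree it is down, "partial down"
# when the nodes disagree. Boolean per-node results are an assumption.

def sensor_state(per_node_ok):
    if all(per_node_ok):
        return "up"
    if not any(per_node_ok):
        return "down"
    return "partial down"

print(sensor_state([True, True, True]))   # all nodes see the sensor up
print(sensor_state([True, False, True]))  # nodes disagree
print(sensor_state([False, False]))       # all nodes see it down
```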

Failure Cluster Operation

  • Failure scenario 1
    If one or more of the Failover nodes are disconnected from the cluster (due to hardware or network failures) the remaining cluster continues to work without disruption.
  • Failure scenario 2
    If the Primary Master node is disconnected from the cluster, one of the failover nodes becomes the new master node. It takes over control of the cluster and will also manage notifications until the primary master reconnects to the cluster and takes back the master role.
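The two failure scenarios above boil down to one rule: the primary master leads while connected, otherwise a failover node takes over, and the role reverts when the primary reconnects. This sketch uses "lowest connected node number wins" as an illustrative election rule; PRTG's actual internal election mechanism is not documented here.

```python
# Sketch of master failover: node 1 is the primary master; if it drops
# out of the cluster, the lowest-numbered remaining node takes over,
# and node 1 reclaims the role on reconnect. The election rule is an
# illustrative assumption.

def current_master(connected_nodes):
    """Return the node currently acting as master, or None if no nodes."""
    return min(connected_nodes) if connected_nodes else None

print(current_master([1, 2, 3]))  # node 1 is primary master
print(current_master([2, 3]))     # node 1 disconnected -> node 2 takes over
print(current_master([1, 2, 3]))  # node 1 reconnects and takes back the role
```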

Sample Cluster Configurations

There are several cluster scenarios possible in PRTG.
  • Simple Failover. This is the most common usage of PRTG in a cluster. Both servers monitor the same network. When there is a downtime on Node 1, Node 2 automatically takes over the Master role until Node 1 is back online.

    Cluster - Simple Failover
  • Double Failover. This is a very advanced Failover cluster. Even if two of the nodes fail, the network monitoring will still continue with a single node (in Master role) until the other nodes are back online.

    Cluster - Double Failover
  • The following four-node-scenario shows one node in disconnected mode. The administrator can disconnect a node any time for maintenance tasks or to keep a powered off server on standby in case another node’s hardware fails.

    Cluster - Four Node Failover

Usage Scenarios for the PRTG Cluster

PRTG’s cluster feature is quite versatile and covers the following usage scenarios.

Failover LAN Cluster

PRTG runs on two (or more) servers inside the company LAN (i.e., close to each other from a network topology perspective). All cluster nodes monitor the LAN, and only the current master node will send out notifications.
Objectives:
  • Reaching 100% uptime for the monitoring system (e.g. to control SLAs, create reliable billing data and ensure that all failures create alarms if necessary).
  • Avoiding monitoring downtimes

Failover WAN or Multi Location Cluster

PRTG runs on two (or more) servers distributed throughout a multi segmented LAN or even geographically distributed around the globe on the Internet. All cluster nodes monitor the same set of servers/sensors and only the current master node will send out notifications.
Objectives:
  • Creating multi-site monitoring results for a set of sensors;

    —and/or—
  • Making monitoring and alerting independent from a single site, datacenter or network connection.

PRTG Cluster Features

  • Paessler’s own cluster technology is completely built into the PRTG software; no third-party software is necessary
  • PRTG cluster features central configuration and notifications on the cluster master
  • Configuration data and status information is automatically distributed among cluster members in real time
  • Storage of monitoring results is distributed to all cluster nodes
  • Each cluster node can take over the full monitoring and alerting functionality in a failover case
  • Cluster nodes can run on different operating systems and different hardware/virtual machines; they should have similar system performance and resources, though
  • Node-to-node communication is always secure using SSL-encrypted connections
  • Automatic cluster update (update to a newer PRTG version needs to be installed on one node only, all other nodes of the cluster are updated automatically)

What is Special About a PRTG Cluster (Compared to Similar Products)

  • Each node is truly self-sufficient (not even the database is shared)
  • Our cluster technology is 100% “home-grown” and does not rely on any external cluster technology like Windows Cluster, etc.