Data Center is our focus

We help you build, access and manage your data center and server rooms.

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP and electrical.

Get ready for the #Cloud

Start your Hyper Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Friday, November 17, 2017

Happy birthday PRTG

From the team at PT Daya Cipta Mandiri Solusi:
We have been your Gold Partner since 2009.
We have installed PRTG at many companies, and we are proud of it.


4 Reasons Why Log Management is Key to CyberSecurity

The Blame Game: Identifying The Culprit During Security Incident Response

After a serious IT security incident is discovered, the priority is to shut it down and recover quickly in a cost-effective manner. However, management will want to find the root of the problem so that they have a place to point the finger, but this is often easier said than done.
Security incidents require a time- and labor-intensive investigation to uncover cybercrime techniques and sift through massive amounts of data. Incidents that involve a privileged account prove to be even more challenging, as authorized insiders or external hackers who have hijacked credentials can modify or delete logs to cover their tracks.
Sophisticated and well-funded cyber criminals often target privileged accounts because they hold the keys to the kingdom, allowing criminals to steal data on a massive scale, disrupt critical infrastructure and install malware. Under the guise of privileged users, attackers can lurk within systems for months, gaining more and more information and escalating their privileges before they are even discovered.
In addition to deliberate attacks, human error is also a factor to consider during an investigation. For example, an inexperienced administrator may have accidentally misconfigured a core firewall, turning a quick resolution into an overwhelming investigation. IT staff members often use shared accounts such as “administrator” or “root”, making it extremely difficult to determine exactly who did what. With this degree of uncertainty, it is easy to start the blame game between parties.
One way to simultaneously combat the threat of external hackers and human error is to collect relevant and reliable data on privileged user sessions. This allows investigators to easily reconstruct user sessions and can reduce both the time and cost of investigations.
In addition to user session monitoring and management, having an incident management process in place will be critical to ensure quick and effective identification of a threat source.
The Incident Management Process
To identify an incident and respond quickly, organizations need to develop a multi-step management process that they can consistently rely on. For starters, NIST, the CERT/CC and ISO 27002 each outline a step-by-step process for incident management. These encourage a consistent approach, especially for those organizations under strict compliance regulations. Businesses are expected to regularly define, and in the case of a security event, execute an incident response procedure. They must establish that they are capable of taking action when critical assets are endangered.
The CERT/CC concept has four components. First, an incident is reported or otherwise detected (detection component). Second, the incident is assessed, categorized, prioritized and queued for action (triage component). Third, the incident is researched to determine what has occurred and who is affected (analysis component). Finally, specific actions are taken to resolve the incident (incident response component). Essentially, organizations need to find a process like this that they can implement and reference in the case of a security breach.
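To make the four components concrete, here is a minimal sketch, in Python, of an incident record that moves through detection, triage, analysis and response stages. The class names and fields are hypothetical illustrations, not part of the NIST or CERT/CC material.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto


class Stage(Enum):
    """The four CERT/CC components modeled as workflow stages."""
    DETECTION = auto()   # incident reported or otherwise detected
    TRIAGE = auto()      # assessed, categorized, prioritized, queued
    ANALYSIS = auto()    # research what occurred and who is affected
    RESPONSE = auto()    # specific actions taken to resolve the incident


@dataclass
class Incident:
    """Minimal incident record; all fields are illustrative."""
    summary: str
    severity: str = "unclassified"
    stage: Stage = Stage.DETECTION
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage, note: str = "") -> None:
        """Move the incident forward and keep an auditable trail of transitions."""
        self.history.append((datetime.utcnow(), self.stage, new_stage, note))
        self.stage = new_stage


# Example: a detected incident is triaged and queued for analysis.
incident = Incident(summary="Suspicious root login from an unusual host")
incident.severity = "high"
incident.advance(Stage.TRIAGE, note="Assigned to on-call analyst")
incident.advance(Stage.ANALYSIS, note="Pulling session recordings and logs")
```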
Identifying and Acquiring Data Sources
Deep investigations require organizations to first identify and then collect the data in question. This is the first step in any forensic process. Data sources may include security logs, operations logs and remote access logs that have been created on servers. They can also span client machines, operating systems, databases, and network and security devices. Investigations that involve privileged accounts could also include session recordings, or playable audit trails that can be critical in uncovering what has happened.
Once the data is in sight, the analyst must then acquire it. Some log management tools will centrally collect, filter, normalize and store log data from a wide range of sources to simplify the process. For cases involving privilege misuse, data must also be collected from privileged session recordings.
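As a rough illustration of what central collection and normalization means in practice, the short sketch below parses log lines from two different sources into one common record format. The regular expression, source names and field names are assumptions made for this example, not the schema of any particular log management product.

```python
import re

# Hypothetical pattern for a syslog-like line: "Nov 17 09:14:02 host message...".
SYSLOG_RE = re.compile(r"^(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)$")


def normalize(line: str, source: str) -> dict:
    """Map a raw log line from a known source into one common schema."""
    if source == "syslog":
        m = SYSLOG_RE.match(line)
        if m:
            return {"source": source, "host": m.group("host"),
                    "timestamp": m.group("ts"), "message": m.group("msg")}
    elif source == "audit_csv":
        # Assumed CSV audit format: timestamp,user,action
        ts, user, action = line.split(",", 2)
        return {"source": source, "host": None, "timestamp": ts,
                "message": f"{user}: {action}"}
    # Fall back to storing the raw line so nothing is silently dropped.
    return {"source": source, "host": None, "timestamp": None, "message": line}


records = [
    normalize("Nov 17 09:14:02 fw01 configuration reloaded by admin", "syslog"),
    normalize("2017-11-17T09:20:11,root,deleted /var/log/secure", "audit_csv"),
]
```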
With all the data in hand, it must then be verified to ensure its integrity. This might include protecting against tampering through the use of encrypted, time-stamped and digitally signed data.
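One simple way to illustrate that kind of integrity protection is a keyed hash (HMAC) stored with each record at write time and re-computed during the investigation. This is a sketch of the principle only; the key handling and signing scheme of a real log management tool will differ.

```python
import hashlib
import hmac

# Illustrative only: in practice this key lives on the log server, not in code.
SECRET_KEY = b"replace-with-a-key-held-by-the-log-collector"


def sign_record(record: str) -> str:
    """Compute the keyed hash stored alongside the record when it is written."""
    return hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()


def verify_record(record: str, stored_mac: str) -> bool:
    """Re-compute the hash during an investigation and compare in constant time."""
    return hmac.compare_digest(sign_record(record), stored_mac)


entry = "2017-11-17T09:20:11 root deleted /var/log/secure"
mac = sign_record(entry)
print(verify_record(entry, mac))        # True: untouched record verifies
print(verify_record(entry + " ", mac))  # False: any tampering breaks verification
```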
Examination and Analysis 
During an investigation, each piece of data must be closely examined in order to extract relevant information. By combining log data with session recording metadata, the examination of privileged account incidents can be expedited dramatically.
Once the most critical information has been extracted, the analysis process begins. Through machine learning, organizations can analyze privileged user behavior and detect when behavior falls outside their normal operating parameters. When combined with replayable audit trails showing logins, commands, windows or text entered from any session, this can provide a full picture of the suspicious activity. With all of these elements, analysts can create a full timeline of events for the reporting phase.
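As a toy example of the baseline-versus-deviation idea behind such behavior analytics, the sketch below flags a login whose hour of day falls far outside a privileged user's historical pattern. The single feature and the threshold are arbitrary assumptions; real products use much richer behavioral models.

```python
from statistics import mean, stdev

# Hypothetical history of login hours (0-23) for one privileged account.
login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 10, 9]


def is_anomalous(hour: int, history: list, z_threshold: float = 3.0) -> bool:
    """Flag a login hour that deviates strongly from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold


print(is_anomalous(9, login_hours))  # False: within the normal working pattern
print(is_anomalous(3, login_hours))  # True: a 3 a.m. login warrants a closer look
```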
Reporting and Resolution
Once all of the data is analyzed, the laborious reporting process can begin. Rapid investigations and the ability to make quick, informed decisions can be challenging and require real-time data about the context of a suspicious event. In these scenarios, access to risk-based scoring of alerts, quick search and easily interpreted evidence can expedite the process.
In today’s fast-moving threat landscape, organizations must have capabilities in place to secure critical assets by managing and monitoring privileged accounts and access. Alongside a robust incident management process, businesses can be prepared for when an incident occurs, and with access to the right data, along with the ability to easily sort through it, they will be empowered to quickly uncover the source of the incident and future-proof systems.
Csaba Krasznay, Security Evangelist at Balabit

Log management plays a serious role in identifying IT security incidents.  Whether you are attacked by a sophisticated cyber criminal or experience a breach due to human error, it is crucial that you get to the heart of the problem quickly and efficiently.  
Luckily, Nagios Log Server makes it easy to interpret, graph, store and manage your system log data so you can easily investigate and correct the problem.  Download the fully-functional trial here.


source: http://www.informationsecuritybuzz.com/articles/blame-game-identifying-culprit-security-incident-response/

5 Effective Hyperconvergence Strategies (Gartner)

Five Keys to Creating an Effective Hyperconvergence Strategy

FOUNDATIONAL | Refreshed: 06 February 2017 | Published: 29 October 2015 | ID: G00292684

Summary

The true value proposition in hyperconverged systems is often missed in evaluations because of excessive hype among vendors. Here's a framework that I&O leaders can use to cut through the hype.

Overview

Key Findings

  • IT leaders are inundated with claims about what hyperconvergence means and what its potential benefits are, often amounting to hype from the vendors.
  • The integrated system — contrary to its identification with simplicity — actually puts a heavy burden of complexity on IT leaders who make strategic infrastructure decisions for IT corporate services, operations and development.
  • Hyperconvergence expands the variety of choices available to IT leaders, but may add complexity and confusion about what claims of simplicity and flexibility mean to you specifically.
  • Five key attributes can enable planners to cut through the hype and make more-effective hyperconvergence decisions.

Recommendations

  • Create a compelling strategic hyperconvergence evaluation composed of the following five key decision attributes: simplicity, flexibility, selectivity, prescriptive and economic.
  • Parse vendors' claims spread across the five decision determinants by validating their application, use and benefits to your strategic infrastructure objectives.
  • Define, weigh and rank each of these factors according to your needs and their importance to projects, use cases, in-house technical expertise, budget and objectives (see the scoring sketch after this list).
  • Combine these five key attributes with other technical evaluation criteria on performance, scaling, resilience, security and availability.
  • Select the best supplier finalists by proofs of concept that deliver on both your performance objectives and the five determinants most important to your IT and business needs.
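As one possible way to operationalize the "define, weigh and rank" recommendation above, the sketch below computes a weighted score per vendor across the five determinants. The weights and the 1-5 scores are placeholders to be replaced with your own evaluation data, and the vendor names are purely hypothetical.

```python
# Weights express how important each determinant is to your organization (sum to 1.0).
weights = {"simplicity": 0.30, "flexibility": 0.25, "selectivity": 0.15,
           "prescriptive": 0.15, "economic": 0.15}

# Hypothetical 1-5 scores assigned after briefings and proofs of concept.
vendors = {
    "Vendor A": {"simplicity": 4, "flexibility": 3, "selectivity": 2,
                 "prescriptive": 5, "economic": 3},
    "Vendor B": {"simplicity": 3, "flexibility": 5, "selectivity": 4,
                 "prescriptive": 3, "economic": 4},
}


def weighted_score(scores: dict) -> float:
    """Combine per-determinant scores into a single comparable number."""
    return sum(weights[k] * v for k, v in scores.items())


for name, scores in sorted(vendors.items(), key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The same matrix can then be extended with the additional technical criteria mentioned above, such as performance, scaling, resilience, security and availability.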

Analysis

Prioritize and Define Five Key Hyperconvergence Determinants by IT-Business Objectives

We sifted through hundreds of Web pages, presentations, briefings, notes and other materials from vendors, consultants and clients over the past two to three years. One reason for this effort was that, increasingly in this crowded market teeming with new entrants and claims of superiority, several attributes repeatedly kept appearing. Systems were almost always declared as simple and easy to use and deploy; highly flexible for a wide range of tasks; offering multiple choices of software and hardware partners; economical; and as having prescriptive qualities and specifications that always maximized performance, utilization, availability and other benefits. We also observed that many of our clients were confused about how best to make a decision that could have lasting positive (or negative) effects in their data centers, depending on the wisdom of their selection. So we developed a framework of five key decision determinants that can be used in hyperconvergence integrated system (HCIS) RFPs to arrive at the most appropriate selection: simplicity, flexibility, selectivity, prescriptive and economic (see Figure 1).
Figure 1. Key Determinants in a Hyperconverged Integrated System Decision
Source: Gartner (October 2015)
Here are examples of the cadence in the literature we found on two of the most oft-cited HCIS benefits: simplicity and flexibility.
Simplicity: (1) combines all the infrastructure below the hypervisor, eliminating the need for about a dozen discrete infrastructure and software products; (2) simplifies and streamlines common workflows, eliminating the need for disparate management solutions; and (3) pools and allocates software-defined and physical resources through a single, user-friendly interface.
Flexibility: (1) provides the flexibility to pool commodity local hard-disk drive (HDD) storage with RAM and/or flash across multiple server farms; (2) features pay-as-you-grow pricing that offers more flexibility to scale the environment as needs grow; and (3) enables the same systems to act as backup/disaster recovery targets and restore workloads when needed.
Indeed, many more beneficial attributes of these two categories could be added, such as low click provisioning; centrally managed remote distributed sites; fast setup, install and provisioning; scale-out and up; bimodal agility; architecturally adaptable to broad use cases; and so on. When decision time comes, how important are these determinants in the decision process? Perhaps an articulate vendor, or a strong communicating channel, may deliver a potent message of one or a few of these as strengths. Or they may intermingle them among the many other technical minutiae that they are anxious to convey, such as input/output operations per second (IOPS), latency and response times, snapshots, deduplication, tiered storage, etc. Of course, the latter are also important. So we suggest that IT leaders and planners compose a hierarchy of priorities among these determinants as a complementary analysis to the technical dimension. Every IT organization should have its own version of what simplicity, flexibility, selectivity, prescriptive and economic will mean to their organizations in particular (e.g., how they may impact: agility, head count, service catalog offerings, environmental footprint, etc.).

Why the Five Keys Play an Important Complementary Role to Technical Evaluations

Hyperconverged infrastructures potentially represent an important new milestone in delivering lean and agile infrastructures. Gartner calls such systems Mode 2-type platforms for the fast and agile digital business world (see "Kick-Start Bimodal IT by Launching Mode 2" ). HCIS is still several years from commonplace Mode 2 deployments; as such, infrastructures must effectively be fiercely adaptable to managing the rapidly changing and evolving competitive and consumer market. Such environmental forces demand not only technical speed, but also require elastic resource pools, intelligent fabric infrastructures, hypervisors, container-based and open-source ecosystems, quick deployments and retirement, various application templates, automation and orchestration, and hybrid cloud potential. Such Mode 2 systems must satisfy the paradigm of develop, deploy, fail/change often, recover and rejuvenate. Such modus operandi will not be self-evident in pure Mode 1 static and scalar metric evaluations alone, as with most of today's infrastructure. The five keys should help to flesh out the more subtle qualities in the offerings. The difficulty for planners, architects, business management and CIOs is understanding what the vendors associate and imply with their products as simple, flexible, selective, prescriptive and economical, and relating them to your own business needs. In this research, we will provide some of the correlates of the keys, with continuing research on identifying best practices that enable qualitative and quantitative analysis in evaluating and positioning the numerous HCIS, which are now marketed by virtually all system vendors in conjunction with channel and software partners. Here are descriptions to start the evaluation process.

Simplicity

To be simple is not merely to be configured simply, or to be operationally simple. Simple suggests a full life cycle, including upgrades to components for technological advantage (e.g., power consumption); transparent software management enhancements; automated diagnosis and repair; one-stop maintenance; click-and-run provisioning; resource pool fluidity; automated file system management, sharding, tiering and storage reclamation; under-the-hood performance and reliability logical views; etc.

Flexibility

To be flexible often implies commodity parts or SKUs for various use cases and space requirements, or the ability to scale to accommodate various use cases. However, flexibility can have both technical and business connotations. Many users are averse to breaking down existing walls or silos only to create yet another silo. Systems with high degrees of flexibility should be able to "blend" with existing infrastructure and applications or previous-generation systems through interoperability, offloading and tiering. They may also assume chameleon properties as applications change. They may be able to scale linearly and independently by assigning roles to nodes for compute, storage, security and networking.

Selectivity

To be selective extends beyond being flexible, with product, module, rack or node choices; software automation and management; hypervisor selection; centralized and distributed IT services management, etc. Of high importance is whether a system supplier presents a locked-down appliance with a fixed menu of options, or enables key partnerships with innovative hardware and software vendors who agree to integrate, test and validate their solutions on the main supplier's platform. Selectivity as a characteristic may even conflict with simplicity, requiring the IT-business planning and review committee to make trade-offs as part of short-term and longer-term goals. If IT hardware skills exist but orchestration management has been weak, the bias could be shifted toward a strong hardware/software partnership, where these two disciplines are delivered transparently; save wasteful hours of development, test, run and revise time; and increase useful life.

Prescriptive

The prescriptive approach leans heavily on meticulous component selection, integration and tuning, at both hardware and system software levels, complemented by rich, functional software that abstracts and manages components to generate maximum system utilization. The key is achieving predictable performance and availability, with an ability to handle almost anything you can throw at it, as a result of carefully engineered design. The vendor will bet its business model's success on performance as its distinguishing trademark. The IT organization, in turn, will accept the prescribed configuration as long as it runs its applications at predictable service levels with high capacity and utilization for IT and business user needs. These systems may somewhat compromise flexibility in order to deliver the higher priority of performance and predictable behavior. Alternatively, they may point toward a non-HCIS integrated and converged solution.

Economic

"Economic" is a term that should be defined by planners as well as financial and procurement managers. Some IT organizations or procurement departments seek to optimize capital expenditure, while others seek to shift cost burdens to operating expenditure (e.g., cloud). An HCIS decision can focus on the potential total cost of ownership and operational cost savings of appliances, with relatively limited scalability. Until these systems mature, they may not approach nor emulate the scalability and mission-critical attributes of converged infrastructure systems. A cost analysis and comparison with existing infrastructure is always recommended but, in most cases, is very difficult to execute. Most organizations lack a stable base of comparison, factor in the migration or modernization costs, and may also claim that their engineering prowess already exists to create nearly the same equivalence as the packaged systems. We have heard the latter argument often enough to estimate that vendors of all types of integrated systems may, in reality, only have an addressable market of 50% of the total system market through the next three years. Alternatively, those planning "greenfields" and new data center locations are motivated by jump-starting agile and simpler infrastructure to manage and maintain at lower costs.
It's important to note that you need not restrict yourself to these five categories exclusively. You may find a subset as satisfying your evaluation needs, or you may wish to add to them (for example, an argument might be made to include agility). We prefer it be subsumed under simplicity or flexibility, or both. Simplicity, for example, may deliver features that contribute to the increased agility of the system. What are those features? As a separate category, you may want the supplier to articulate the precise features that deliver increased levels of agility for Mode 2 operations. Having a separate category can make the deliverables more compelling and clear.
Advice: Ensure that vendors explain in depth their application of the five key determinants in their solutions to add precision and depth to your decision on how well they match your needs.

Conclusions: Rewrite the Narrative

IT and business leaders who will be responsible for important service delivery must team together and articulate their individual perspectives, derived from the important evaluation attributes of the five key determinants. They can start by laying issues such as these on the table:
  • What are the pain points that slow our response times down?
  • Why are we failing so often and taking so long to recover?
  • Why does it take so much time to set up configurations each time a new application is presented to us?
  • Why do we have to be "plumbers" and burrow into the nuts and bolts of the system to find why and where performance slowed?
  • Why do our RFPs fail to deliver what we anticipated?
  • Why are we engaged so often with vendors denying their responsibility for outages or degraded operations?
The five key determinants are designed to break the spell of faster, less expensive refresh cycles over and over again by rewriting the narrative. When it comes time for a refresh, IT planners, engineers, architects and business leaders should design a new narrative. The new actors in this narrative will don different clothing from the standard "double the performance at 30% lower cost." Now, the search should uncover real need-driven value where the devil will be in the details.

Evidence

Some of the principles in this research were tested in a Gartner Research Circle live chat forum. The Gartner Research Circle is a managed panel of IT and business leaders. A screener questionnaire to examine current positions on hyperconvergence was sent to members in North America on 2 September 2015. Ninety-one members responded, and 12 members went on to participate in a moderated live chat on 17 September. All live chat participants were familiar with the term in the early stages of discussion or evaluation. Research was developed describing the results in detail.
The five determinants were developed in research conducted over a three-month period into virtually all vendors' communications, Web-based product descriptions, vendor briefing documents and presentations, and in-person discussions. In addition, numerous client interactions revealed interest factors and motivations in investigating integrated systems.