Data Center is our focus

We help you build, access, and manage your data centers and server rooms

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP, and electrical.

Get ready for the #Cloud

Start your Hyper Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software, and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Saturday, December 29, 2012

4 Keys to Monitoring Your CLOUD Services

Organizations are becoming increasingly interested in leveraging cloud computing services to improve flexibility and scalability of the IT services delivered to end-users. However, organizations using cloud computing services face the following challenge: decreased visibility into the performance of services being delivered to their end-users.
Many cloud providers offer dashboards for tracking the availability of their services, as well as alerting capabilities for identifying service outages in a timely manner, but these capabilities are not sufficient for end-users who need full control over the performance of the cloud services they use. More importantly, organizations cannot rely solely on the monitoring capabilities offered by their cloud service providers; they need to deploy third-party solutions that allow them to monitor the performance and SLA achievement of cloud services.

Overview of Cloud Computing
The term Cloud Computing stands for a type of service that allows organizations to deliver business-critical applications to their employees, customers, and partners over the Internet. End-users are able to access these applications using a web browser, while organizations are able to improve the flexibility and scalability of IT services and pay only for the computing resources they actually use.

There are three types of cloud computing services: public, private, and hybrid.
Public cloud computing services are hosted by third-party service providers, such as Amazon, Google, Rackspace, GoGrid, and VMware, and allow organizations to use externally hosted computing resources while paying only for the computing resources they actually use. This method of cloud computing fits the most common definition of cloud services, and it is especially appealing to small and medium-sized organizations.

Private cloud services are hosted by end-user organizations themselves to support their internal needs and allow their business users to access business-critical data and services over the Internet. Even though this type of cloud computing does not completely fall under the traditional definition of cloud computing (computing resources are not hosted and managed by third-party providers), private cloud services are getting a lot of traction from end-user organizations. Deployment of these services does not require involvement from external providers, but private cloud services still help organizations achieve the majority of the promised benefits of cloud computing.

Hybrid cloud computing services represent a combination of IT services in which the computing resources supporting business-critical applications are hosted both in the cloud and in internally managed data centers. This allows organizations to keep internal control over computing resources while complementing these resources with cloud capacity that users can access over the Internet.

Top Management Challenges
Inability to identify applications that could be seamlessly moved to the cloud
Before deciding which applications should be moved to the cloud environment, organizations should assess the IT and business benefits they can achieve from this action. Additionally, organizations should have capabilities in place to test whether the cloud infrastructure they are using can support the applications being transferred to the cloud.

Unfortunately, many organizations do not have technology capabilities that would allow them to conduct this type of testing and, therefore, they are forced to make adjustments to available capacity as they experience problems with quality of service.
This type of challenge prevents organizations from achieving one of their top goals for managing the performance of business-critical applications: prevent performance issues from occurring before end-users are impacted.

Inability to make educated decisions about adding or terminating cloud resources
Deploying cloud computing services changes the way organizations go about managing their computing resources, as it gives them more flexibility to use available capacity in the most cost-effective way. Instead of making costly investments in new hardware when they need additional capacity, organizations have the ability to increase and decrease the cloud resources used as demand changes. In order to take full advantage of this capability, organizations need full visibility into how their existing resources are being used in both internal and external environments.

Organizations need to have capabilities for monitoring usage of the cloud resources that would also alert them when they need additional resources and about applications for which these additional resources are needed. These monitoring capabilities include tools for monitoring CPU usage per computing resource, ratios between systems activity and user activity, and CPU usage from specific job tasks. Also, organizations should have capabilities for predictive analytics that allow them to capture trending data on memory utilization and file system growth, so they can plan needed changes to computing resources before they encounter service availability issues. Not having these capabilities in place prevents organizations from taking timely actions for optimizing cloud resources in use to meet changes in business demand.
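The predictive part of this — projecting when a growing metric such as file-system usage will hit a capacity limit — can be sketched as a simple linear extrapolation over daily utilization samples. This is a minimal illustration with hypothetical numbers, not a feature of any particular product:

```python
# Extrapolate a resource-utilization trend to estimate when a threshold
# will be crossed, so capacity can be added before availability suffers.
def days_until_threshold(samples, threshold):
    """samples: utilization (%) measured once per day, oldest first."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope of utilization vs. day index.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # flat or shrinking usage: no projected breach
    current = samples[-1]
    if current >= threshold:
        return 0
    return (threshold - current) / slope

# Hypothetical daily file-system usage (%): grows about 2 points per day.
usage = [60, 62, 64, 66, 68, 70]
print(days_until_threshold(usage, 90))  # prints 10.0 (days until 90%)
```

A real monitoring product would feed this kind of calculation with collected metrics and raise an alert well before the projected breach date.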

Inability to monitor performance of applications that use a hybrid cloud approach
Organizations using cloud computing services need to have visibility not only into the performance of applications that have moved to the cloud, but also into the different computing resources on which these applications depend. Typically, organizations find it easier to monitor the performance of applications hosted on a single server than the performance of composite applications that pull computing resources from different sources. This issue becomes even more complex if computing resources are hosted outside corporate firewalls, and organizations do not have full control over and visibility into the performance of these applications.
As mentioned earlier, organizations sometimes use a hybrid model for deploying cloud computing, which presents end-user organizations with the challenge of monitoring usage of resources that are hosted and managed both externally and internally and are being used by the same application.

Improving scalability of the infrastructure creates heterogeneous environments that are difficult to manage
Even though organizations can achieve significant cost savings and increased management flexibility by moving their business-critical applications into the cloud, doing so also creates a new environment that is fairly complex to monitor and manage. As a result, traditional IT management tools are not as effective in these environments as they are in managing the performance of internally hosted applications. This creates the challenge of finding a balance between the scalability and flexibility of computing resources and the manageability of the resulting heterogeneous environment.

Capabilities Needed

Tools for measuring the impact of rules for assigning cloud resources on quality of end-user experience
One of the key benefits of cloud computing services is flexibility of assigning resources needed to support demand from business users. In order to achieve this benefit, many organizations deploying cloud computing services are defining rules for assigning cloud resources to each of their critical IT services and applications. However, the effectiveness of these policies depends on the visibility that organizations have into how cloud resources are being used. Organizations that have technology tools in place to monitor how changes in policies that control allocation of cloud computing resources impact the performance of business-critical applications, as measured from end-users’ perspective, are more likely to reap the full benefits from the deployment of cloud computing.
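As a rough illustration of measuring that impact, the before/after effect of an allocation-policy change on a response-time metric can be quantified like this (the figures and the nature of the "policy change" are hypothetical):

```python
# Quantify how a change to a resource-allocation policy affected the
# end-user experience by comparing average response times measured
# before and after the change (all figures are hypothetical).
def pct_change(before_ms, after_ms):
    before = sum(before_ms) / len(before_ms)
    after = sum(after_ms) / len(after_ms)
    return 100.0 * (after - before) / before

before = [200, 210, 190, 205, 195]  # response times before the change (ms)
after = [160, 170, 150, 165, 155]   # after allocating more capacity (ms)
print(f"{pct_change(before, after):+.1f}% change in average response time")
# prints "-20.0% change in average response time"
```

Tracking this number per application, per policy revision, is what turns allocation rules from guesswork into something that can be tuned against end-user experience.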

Ability to compare cloud service delivery to performance of the internal environment
Organizations can garner the full benefits of cloud computing services only if they can ensure that the performance of these services, as experienced by business users, is at an optimal level. The best way for organizations to evaluate the performance of these services is to compare it to the performance of services hosted and managed internally.
Having technology capabilities that allow organizations to measure key performance indicators (KPIs) for application performance in both cloud and internal environments allows them to define proper benchmarks for evaluating the performance of cloud services and to make better decisions about the value received from changes to their IT management strategies.
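A minimal sketch of such a benchmark comparison, assuming response-time samples collected for the same transaction mix in both environments (all figures and the 25% tolerance are hypothetical):

```python
# Benchmark a cloud deployment's response-time KPI against the same
# application hosted internally (all sample figures are hypothetical).
def mean(values):
    return sum(values) / len(values)

internal_ms = [120, 130, 125, 140, 135]  # internally hosted baseline
cloud_ms = [150, 160, 155, 170, 165]     # cloud-hosted deployment

baseline = mean(internal_ms)
observed = mean(cloud_ms)
# Flag the cloud service if it is more than 25% slower than the
# internal benchmark for the same transaction mix.
acceptable = observed <= baseline * 1.25
print(f"baseline {baseline:.0f} ms, cloud {observed:.0f} ms, ok={acceptable}")
```

The key design point is that the internal environment supplies the benchmark; the cloud figure only becomes meaningful relative to it.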

An independent tool for monitoring/validating performance of a heterogeneous set of applications in the cloud
As organizations deploying cloud computing services trust third-party providers to deliver a quality of service that is acceptable to end-users, they need technology tools in place that keep their service providers “honest” and provide capabilities for monitoring SLA achievement that go beyond the monitoring capabilities provided by cloud vendors. As part of their agreements with providers of public cloud services, organizations are requesting guarantees for the levels of performance that service providers are expected to deliver. However, in order to ensure that these service levels are met, organizations need independent monitoring tools in place that allow them not only to monitor actual levels of performance as experienced by business users, but also to conduct root cause analysis of problems as they occur. These tools include capabilities for monitoring application response times, service availability, and page load times, and the ability to monitor traffic during peak times.
For organizations that want to receive the full value of cloud computing services, it is critical to be able to understand whether any performance issues they experience are caused by their cloud service provider, network issues, or the design of the application itself.
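Independent SLA validation of this kind boils down to computing availability and response-time attainment from your own probe data rather than from the provider's dashboard. A minimal sketch (the probe results and SLA targets are hypothetical):

```python
# Independent SLA check: given a log of probe results gathered outside the
# provider's own dashboard, compute availability and count responses that
# met the response-time target (probe data and SLA targets are hypothetical).
def sla_report(probes, max_ms=2000, availability_target=99.9):
    """probes: list of (succeeded: bool, response_ms: float) tuples."""
    total = len(probes)
    up = sum(1 for ok, _ in probes if ok)
    availability = 100.0 * up / total
    within = sum(1 for ok, ms in probes if ok and ms <= max_ms)
    return {
        "availability_pct": round(availability, 2),
        "availability_met": availability >= availability_target,
        "fast_responses": within,
        "total_probes": total,
    }

# 1000 hypothetical probes: 2 outages, the rest healthy.
probes = [(True, 300)] * 998 + [(False, 0)] * 2
print(sla_report(probes))
# Availability here is 99.8%, below a 99.9% target, so independent data
# of this kind could contradict a provider's claim of compliance.
```

Because the probes are run by the customer, the resulting numbers are usable evidence in an SLA dispute, which a provider-supplied dashboard is not.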

Ability to monitor cloud applications alongside with internal IT systems
The majority of organizations deploying cloud computing services are selecting the hybrid model, which means that they are moving some of their applications into the cloud while other applications are still hosted on internally managed servers and delivered over the corporate network. This requires these organizations to have two different sets of capabilities: one for monitoring the performance of applications hosted outside their corporate firewalls and one for those hosted in their data centers. It is hence important for the monitoring tool to support integrating data from what may be two restrictive networks that form part of the data center.
Having this capability in place allows organizations to ensure optimal levels of performance of business-critical applications regardless of hosting method and, in the process, make their IT operations more productive.

Business Benefits
Organizations that are using the right mix of technology solutions for monitoring the performance of applications in the cloud are more likely to enjoy the following business benefits:
Prevention and resolution of performance issues in a timely manner. Organizations that have visibility into resource utilization in the cloud are more likely to make educated and timely decisions about resource allocation and, therefore, to prevent performance problems before they impact their business users.
Ability to support changes in business demand. Full visibility into the performance of cloud services allows organizations to unlock the benefits of cloud computing, especially when it comes to improved flexibility of IT management. Organizations that have end-to-end visibility into the performance of cloud services and their internal infrastructure are able to make better decisions about adding or subtracting resources to support changes in business demand, which allows them to ensure a high level of quality of end-user experience at optimal cost.
Ability to optimize spending decisions. Organizations deploying independent tools for monitoring performance, SLA achievements, and usage of cloud services are more likely to be able to make educated decisions about the return they are getting from their investment in cloud services.

Recommendations for action
In order to have full visibility into the performance of cloud services, organizations should consider taking the following actions:
• Deploy independent tools for monitoring and validating the performance of cloud services
• Deploy tools for measuring the impact of rules for assigning cloud resources on the quality of end-user experience
• Develop the ability to compare cloud service delivery to performance of the internal environment
• Make sure your monitoring tool supports a hybrid deployment architecture

ManageEngine Applications Manager’s Capabilities

ManageEngine provides capabilities that allow organizations to make educated decisions about which parts of the enterprise infrastructure should be moved into the Cloud by providing performance reports. There is out-of-the-box support for monitoring application servers, database servers, servers, and web servers. Support for packaged applications like Exchange, SAP, and Oracle E-Business Suite further helps IT managers make informed decisions. Additionally, ManageEngine Applications Manager allows organizations to monitor SLA achievement for cloud services and to troubleshoot and resolve problems with application performance regardless of the hosting and delivery method (Internet and/or corporate networks).

The distributed architecture facilitates monitoring applications in the cloud and those present inside the corporate datacenter from the same console.

Use Password Manager Pro to manage your company's passwords.

Triple Recognition for Password Manager Pro this year! World’s Mightiest Enterprises Repose Trust!

The IT divisions of three of the world's largest organizations (a software maker, a retail chain, and a virtualization platform provider) have deployed Password Manager Pro to manage their privileged passwords this year!
Triple Recognition for Password Manager Pro
With 2012 fast drawing to a close, we looked back and reflected on the year gone by. What a fabulous year it has been! Password Manager Pro has continued its winning streak all along!
While we have kept our existing customers across the globe very happy, the product has won the business and goodwill of a great number of large enterprises as new customers. With the addition of new customers in 2012, we are thrilled to see Password Manager Pro in action, with over 50,000 administrators and over 150,000 users logging in every day to securely access and manage millions of passwords.
As in the past, enterprises of various types, sizes, and domains have chosen Password Manager Pro this year. But, specifically, a great number of large enterprises, including many Fortune 500 companies, have deployed Password Manager Pro to control access to their IT infrastructure in 2012. This includes the IT divisions of three of the world's largest organizations, truly giants in their respective segments, with great brand value and recognition all over the world.
The world's largest software maker, the world's largest retail chain, and the world's largest virtualization platform provider (a global leader in virtualization and cloud infrastructure solutions) have chosen Password Manager Pro this year to manage the privileged identities in their massive IT infrastructures.
IT managers and admins at these enterprises find Password Manager Pro easy to use and cost-effective: it offers rock-solid security with proven encryption standards; improves operational efficiency by automatically resetting passwords across remote systems and enforcing standards; provides a highly secure way to selectively share passwords; and offers a high-availability architecture at no extra cost. No wonder they chose Password Manager Pro!
While we take pride in delivering value to our customers, we are certainly not going to become complacent. Though our customers have been very kind and reposed their trust in us, they have not failed to point out their concerns, comments, pain points, and constructive criticisms. We have received a good number of feature requests too. We are giving sincere attention to all the feedback with a view to making the product better and winning back the trust of a handful of dissatisfied customers. This is our promise for 2013.
Thank you all, and we wish you a secure, wonderful festive season!
ManageEngine Password Manager Pro

How do companies limit corporate Internet usage?

How can companies implement an Internet fair use policy for corporate users?

An Internet fair use policy, or acceptable use policy, defines the appropriate Internet usage behavior expected from employees in their workplace. The policy aims to protect the employees as well as the company's IT infrastructure from malicious threats, inappropriate content, and bandwidth-draining web applications, all of which ultimately affect the productivity and competitiveness of the company as a whole.
To add teeth to the ‘written policy’, companies should have tools in place to enforce the policy in letter and in spirit. Companies use non-intrusive, real-time network security monitoring software like ManageEngine Firewall Analyzer to implement an Internet fair use policy in their corporate network.
With ManageEngine Firewall Analyzer, you can get detailed reports on your corporate users' Internet usage and verify whether it complies with the policy. The reports include:
  • List of denied URLs and URL categories each user tried to access
  • List of allowed URLs and URL categories accessed by each user
  • Sent, received and total Internet bandwidth consumed by each user
  • Protocol based Internet usage of each user
  • Separate report for each protocol
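Conceptually, each of these reports is an aggregation over the firewall's logs, which Firewall Analyzer performs for you. The per-user allowed/denied grouping can be sketched as follows (the log format shown here is hypothetical):

```python
from collections import defaultdict

# Aggregate firewall log entries into per-user allowed/denied URL sets,
# the same grouping the reports above present (log format is hypothetical).
def per_user_urls(log_lines):
    report = defaultdict(lambda: {"allowed": set(), "denied": set()})
    for line in log_lines:
        user, action, url = line.split()  # e.g. "david denied example.com"
        report[user][action].add(url)
    return report

logs = [
    "david denied gambling.example.com",
    "david allowed mail.example.com",
    "alice allowed docs.example.com",
]
report = per_user_urls(logs)
print(sorted(report["david"]["denied"]))  # prints ['gambling.example.com']
```

A real firewall log carries timestamps, byte counts, and URL categories as well, which is what makes the bandwidth and category reports above possible from the same data.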
In this series of posts, we will show how companies use Firewall Analyzer to enforce Internet fair use policy.
Part 1
How to get the list of denied URLs and URL categories each user tried to access, and the list of allowed URLs and URL categories accessed by each user
Using Firewall Analyzer, network security administrators can obtain detailed reports on employees who have tried to access URLs that are ‘denied’ under the Internet fair use policy. They can also obtain reports on users who have accessed safe or allowed URLs. See how it can be done and how your company can benefit from it.
Steps Involved:
Create a report profile that will generate a report displaying the denied URLs and URL categories the user has attempted to access and the allowed URLs and URL categories the user has accessed.
Create a new report profile
Create custom report : Denied and Allowed URLs accessed by a user
Use the ‘Add New > Report Profile’ menu available in the sub-tab.

Wizard Screen 1 – Select Devices and Filters
Allowed Denied URL Report Profile

  1. Enter a name for this report profile. This field is mandatory.
  2. Select the devices as per your requirement.
  3. Choose a filter from the existing list that meets the denied URL report condition.
  4. If there is no such filter available, add a filter. Use the ‘+ Add’ menu link.
  5. Navigate to the next wizard screen. Use the ‘Next’ button.
Add Report Filter
Filter for Allowed, Denied URL Report Profile
  1. Enter a name for this report filter. This field is mandatory. If a filter name is not entered, ‘<Report Profile Name>_filter’ will be assigned as the filter name by default. Select the filter type as required.
  2. Select the ‘Include the following Users’ option and enter the user name in the text box. Use the ‘Add >>’ button to add the users for whom the report should be filtered. Remove a user using the ‘Remove’ button.
  3. Use the ‘Finish’ button to complete the report filter creation.
Wizard Screen 2 – Select Report Type and Schedule
Allowed Denied URL Report Profile
  1. Scroll down and choose the ‘URL Report’ from the ‘Available Reports’ list that meets the denied URL report condition.
  2. Optionally, if you want to modify the definition of the report type, use the ‘+ Add’ menu link.
  3. Optionally, you can schedule the report generation; by default, the report is generated only once.
  4. The report can be emailed to the concerned administrator if you enable the ‘Email the report’ option and configure the ‘Mail Server’ and recipients' email IDs.
  5. Use the ‘Preview’ button to preview the report and the ‘Save’ button to save the report profile.
View Report
The denied and allowed URL report for user ‘David’ is given below:
Allowed Denied URL Report for user ‘David’
Note: This report captures the URLs accessed and the denied URLs attempted by one specific user. To get a report of denied URLs attempted by multiple users, use the ‘Advanced Search’ option.
In subsequent posts we will cover a few of the other methods of implementing an Internet fair use policy using Firewall Analyzer.

Tuesday, December 25, 2012

The Future of Sales Technology

The Future of Sales Technology
Salespeople are always the early adopters. Here's where they (and you) are heading.

THE FUTURE OF SALES? It is now possible to scan the brains of people viewing sales presentations to see whether or not they’re being convinced.
For the past two decades, salespeople have been the early adopters of technology that later permeated the rest of the business world. Salespeople, for example, were the first to embrace smartphones, and CRM was the first viable "cloud-based" application.

Therefore, if you want to know how the general business public will be using computers in the future, you'd best understand the trends that are already taking place within forward-looking sales teams.

To this end, I've been working with sales research pioneer Howard Stevens on a book about the future of selling. We have just completed the chapter on sales technology, which (along with other chapters) is available for free on the Chally website.

1. Cold calling will become impossible.

Today, all companies use some form of voice mail, which provides an automatic and relentless gatekeeper. While sales technology firms have come up with technologies (like autodialers) to overcome these barriers, many decision-makers (especially young ones) no longer use voice mail and only take calls from recognized numbers.

At the same time, there's been an increase in government regulation of cold calling.  Member states of the European Union, for instance, are now required to have laws that prohibit general cold calling. While cold calling remains legal in the United States, the FTC's "Do Not Call List" has greatly curbed unsolicited telemarketing.

The combination of these two factors is already making cold calling less effective at lead generation. Because of this, we see salespeople already migrating to other lead generation methods, such as developing customer relationships using a combination of social media and other "known-person to known-person" communication.

2. Tablets will replace laptops (and maybe desktops).

When the iPad was originally released, Walt Mossberg of The Wall Street Journal called it a "pretty close" laptop killer. There are now growing signs that "pretty close" was an understatement. For example, a recent study revealed that 89% of iPad owners bring their iPad when traveling, and more than one in three leave their laptop at home.

Within 90 days of its introduction in early 2010, the iPad managed to penetrate 50 percent of Fortune 100 companies, and by 2011, iPad sales were eating into PC sales. Microsoft's recent announcements identifying its Surface product as key to the company's future indicate that Microsoft takes the tablet threat seriously indeed.

While it is currently too soon to tell for certain, we remain deeply skeptical of the ability of Microsoft's Surface tablet to establish itself as a third alternative in the tablet market. While there's no question that Windows machines will remain a fixture in the business world for many years to come, we feel the days of the dominance of the desktop and laptop inside sales teams are drawing to a close.

3. Sales management will become more data-driven.

Sales management has always been data-driven; few corporate metrics are more visible than sales figures! However, because sales revenue measures after-the-fact results, sales executives don't know whether their strategies are actually responsible for revenue increases.

As a result, most sales managers rely primarily on intuition and tradition when making important decisions. For example, companies spend billions of dollars every year on sales training that attempts to "clone" the winning behaviors of top salespeople, even though there's no data to show that such training improves overall sales performance.

Increased data gathering through CRM and survey vehicles is now making it possible to gather and analyze demographic and performance data about sales personnel. This scientific process often reveals that the "intuitive" truths about sales management are dead wrong.

Top salespeople, for example, always build their success on pre-existing natural talent that tends to be unusual in the general population. A data-driven approach to sales management thus allows companies to re-target sales training to making average performers slightly better rather than wasting time trying to turn them into stars.

4. CRM will become invisible.

Historically, CRM implementations have had a failure rate as high as 70%, according to some studies. Experts believe that such failures have been largely due to a mismatch between the needs of sales management (i.e. control over the sales process) and the needs of the salespeople (i.e. control over their customer relationships.)

However, CRM systems are gradually becoming "smarter" in the way they use existing information, greatly reducing the amount of clerical work required of the sales team. Tablets and smartphones will make CRM both less burdensome and more customizable, and therefore more attractive to sales teams.

We believe that we're on the brink of a sales technology environment where the accumulation of customer data becomes automatic and CRM thus becomes a more or less invisible part of the overall computing environment, in the same way that Ethernet and email are now simply assumed to be part of the general business tool kit.

5. Interactive video will become ubiquitous.

Video conferencing has been around for over two decades, but has not yet played much of a role in sales environments. However, we believe that this will change over the next decade, and that video interaction will permeate the sales environment, primarily due to the increased use of smartphones and tablets in sales environments.

Fueled by online applications like Skype, the video conferencing market has been growing rapidly, and the integration of video conferencing into iPhones, iPads, and other tablet devices has turned videoconferencing from a specialized application into a preferred way for people (especially young people) to communicate.

We predict increased usage of video conferencing for holding online events, creating collaborative sales proposals, sales training, product demonstrations, and ongoing customer service. Overall, we believe that interactive video is likely to largely replace in-person meetings for all but the biggest-ticket sales.