Data Center is our focus

We help you build, access and manage your data center and server rooms

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP and electrical.

Get ready for the #Cloud

Start your Hyper Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Saturday, January 19, 2013

5 reasons why DCIM implementation fails

Five Reasons Why a DCiM Install Fails

Matt Lane | November 15, 2012

Data center infrastructure management (DCiM) systems have quickly become an integral part of the data center industry. Developing a holistic view of critical points in a data center offers benefits such as more-efficient power usage, downtime prevention, process automation and more. Installing a DCiM solution can be challenging, however. With time constraints, miscommunication, poor planning, and vendors overpromising and under-delivering, there are a number of ways a DCiM install can fail. Data centers shouldn't miss the opportunity to reap the benefits provided by a DCiM solution because of a poorly managed install. In this article, Matt Lane, President of Geist DCiM, outlines some of the reasons why DCiM installs fail and how to prevent them.

1. Lack of Planning

Users should start with their specific business goals in mind. In other words, define what your business needs are first, and then research possible solutions. All too often we see users who want the benefits of the DCiM solution but have not taken time to define their specific needs. To gain the benefits of a DCiM solution, users must have specific goals in mind so they can measure success. A DCiM solution should ultimately be a direct reflection of your business's unique set of needs—not a generic one-solution-fits-all approach.

Once users have defined their goals, they need to work with a DCiM provider to fully understand the process behind a DCiM install. Oftentimes, users play a key role in the physical installation of the solution. The site must be prepared and ready for the install; cables have to be run, floor plans have to be created, personnel have to be trained and so on. Users hear a lot about DCiM but don't truly understand what it takes to implement a successful system. Failure to understand how to implement a DCiM system and how to effectively use it in the long term is a big reason for failure.

To help prevent these failures, begin by asking these questions:

What do I need to accomplish my business goals? (For example, energy savings, coordinated data, alarming, reporting and so on.)

I need X, Y and Z to run my business more effectively. How can your products help me? (Don't start with "What can your product do for me?")

What is my role in the installation process?

What needs to be accomplished before the actual installation?

What is the turnaround time for implementation?

What training comes with this system?

What long-term support and associated costs come with this solution?

2. Misrepresentation by the Vendor

"Overpromise and under-deliver" seems to be a common standard in the DCiM world. Many technologies are new to the market and are promising features that are not yet proven or in some cases even developed. If a company purports to do custom integrations or modules, make sure it has a standard process in place for handling these requests. Requesting details from previously implemented custom modules demonstrates a history of successfully meeting custom demands. In addition, many DCiM vendors are understaffed and unable to completely deliver on what the sales team has promised. Meeting with project managers and engineers to discuss your unique needs before beginning the project will help give you an idea of the vendor's capacity and capabilities.

Another vendor misrepresentation that we see in the real-time data space occurs when companies claim to be vendor neutral. When it gets right down to it, though, many are really proposing to add compatibility hardware to legacy equipment—which adds a lot of cost. It is important for users to do their homework on exactly how a vendor intends to integrate with existing systems and how that affects the cost of the total project.

Items to consider:

Find out what comes "standard" with the system and compare it to your predefined business needs.

If a need is not met from the "standard list," does the vendor offer custom options tailored to your specific business needs? Does it have a standard process for meeting custom needs?

Find out what services are included with the system.

How does the vendor plan to integrate the DCiM system with existing equipment?

Does adding hardware also lock you into that vendor or mean that it will be extremely expensive to ever move away?

3. Ownership of the Process

We see a lot of companies outsource their DCiM implementation to a third party. Some of these third-party companies do a fantastic job, but others try to accomplish the work as a secondary item to their primary goal (e.g., some other software system, hardware sales and implementation, utility rebates and so on). As a secondary effort, the goals and process of the project tend to be less focused, and at times, the delivery of the product is not what was expected by the end user.

Users should have a dedicated owner of the project who can communicate goals, answer questions and understand the scope of what is required. A long-term owner of the system is important: it provides visibility into business-process improvements, keeps the system updated and upgraded, and shows how the system benefits the company on a continual basis. In other words, users need a true partner who will be involved in every step of the installation process, from design to implementation to upkeep.

Items to discuss with possible vendors:

What is your project flow? Take me from the start of the project through completion and upkeep.

Are there other vendors that will be involved with this install? If so, who is accountable if any issues arise during their portions of the project?

If or when something goes wrong, with whom will I discuss this?

4. Misconceptions of Upkeep Costs

Even if you use a turnkey installation provider and implementation vendor, DCiM requires dedicated, assigned resources to be successful. Too often we see a system that gets fully installed but never quite accomplishes the goals that were set because there are no user resources allocated to manage and maintain it. A DCiM system may streamline certain tasks or thwart a potential outage, but in the end, users must still implement a process for change that will help accomplish the business goals. Simply installing a DCiM system doesn't mean that your goals will magically be met. Users must allocate the proper personnel resources to implement a process similar to "measure, analyze, improve, control," which continually loops to create a successful installation.

Another misconception of upkeep costs is underestimating total cost of ownership (TCO). Many vendors use a low-upfront-cost model that lets users get into their DCiM package cheaply, and many businesses fail to realize or examine the ongoing costs for support, maintenance, system additions, upgrades and so on. We install many systems that replace older ones because, when some type of maintenance activity comes due, the cost proposed by the vendor is significantly higher than the original cost to install the system completely. Underestimating TCO leaves many systems in an outdated, nonfunctional state.

Upkeep costs to consider:

What personnel do I need to keep up my system?

What processes will I need to implement to ensure the DCiM solution meets my business goals?

What kind of training will I need to provide personnel on the new system?

What does a support contract cost and what does it include?

How much will maintenance cost for my DCiM system?

How much does an upgrade cost? If an upgrade is "free," what type of install services am I required to pay for?

5. Data Overload

Too often DCiM systems collect so much data that the information you need is difficult to access. Projects with hundreds of thousands of data points are fine if the data can be correlated and communicated well within the DCiM system. Many users think they should collect every point of information they can—which can work in instances where the system is designed to intuitively communicate the important points.

Typically, however, too much point collection makes the DCiM system cumbersome and hard to decipher. To properly track, measure and manage business goals associated with a DCiM solution, users need tools that make interpreting the data meaningful, intuitive and quick. Dashboard views, user-defined points, trending abilities, alarming and automated reporting all help sort and organize the data collected to provide real value from the DCiM system.

Items to consider:

What data points do I need to collect in order to realize my business goals?

How can I view this data quickly and intuitively?

Can I calculate derived data points to see my key performance indicators, such as PUE, DCiE and other metrics relevant to my business goals?

Can I trend and report important data?

Can I set rules that will notify me immediately if there is an urgent situation?
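Two of the indicators mentioned in these questions have simple, well-known definitions: PUE is total facility power divided by IT equipment power, and DCiE is its inverse expressed as a percentage. A minimal sketch (the kilowatt figures below are made up for illustration):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data Center infrastructure Efficiency: the inverse of PUE, as a percentage."""
    return 100 * it_equipment_kw / total_facility_kw

# Example: a facility drawing 1,500 kW in total, 1,000 kW of it for IT load.
print(pue(1500, 1000))   # 1.5
print(dcie(1500, 1000))  # ~66.7 (percent)
```

A DCiM system computes exactly these kinds of derived points continuously from measured data, rather than from one-off spreadsheet snapshots.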

The benefits of implementing a DCiM system are vast, and a failed installation process should not keep data centers from experiencing the rewards. Making a well-informed decision by employing a thorough implementation plan for a DCiM system will help avoid these common pitfalls.

About the Author

As a co-creator of the Environet DCiM solution, Matt Lane has over a decade of experience working in data center monitoring and product development. He brings a wide range of experience as an entrepreneur, business owner and manager. He is currently the President of Geist's DCiM division, which provides custom software for data center infrastructure management.
build-access-manage at

Apa sih Data Center Infrastructure Management (DCIM) ?

What is Data Center Infrastructure Management?

Data Center Infrastructure Management (DCIM) means many things to many people. It is a relatively young term that represents an emerging class of IT physical infrastructure solutions, one that has already generated enormous market acceptance. Gartner predicts that it will quickly become mainstream, growing from 1% penetration of data centers in 2010 to 60% in 2014 (DCIM: Going Beyond IT, Gartner Research, March 2010).

Why is DCIM taking the market with such force?

If you ask the executives who lie awake at night worrying about the tens of thousands of IT assets under their supervision, they’ll explain it with one word: “Help!”
Today’s IT decision-makers are starving for the information, insight, and command-and-control that a true Data Center Infrastructure Management solution offers.
They need to be able to see, understand, manage, and optimize the myriad of complex interrelationships that drive the modern data center, one of the most complex entities on earth. They need holistic information and visibility into the entire IT infrastructure, information that is instantly meaningful and actionable. (Fragmented device-level data is no longer of much use to them.)
To paraphrase one of Gartner’s early definitions: Data Center Infrastructure Management integrates facets of system management with building management and energy management, with a focus on IT assets and the physical infrastructure needed to support them.

So what does this all really mean?

Data Center Infrastructure Management – when it’s the right solution from the right vendor – can optimize the performance, efficiency, and business value of IT physical infrastructure and keep it seamlessly aligned with the needs of the business.
DCIM can help decision-makers:
  • Locate, visualize, and manage all of their physical assets within an integrated “single pane” view of the entire infrastructure
  • Automate the commissioning of new equipment, reducing the need for error-prone, time-consuming manual tasks like walking the floor to confirm what can go where
  • Automate capacity planning with unparalleled forecasting capabilities, including the use of “what if” scenarios
  • Reduce energy consumption, energy costs, and carbon footprint – save the planet while you're saving potentially millions
  • Align IT to the needs of the business – and maintain that alignment, no matter how radically those business requirements may change and grow
But not all DCIM vendors are created equal. IT decision-makers must make a careful evaluation of today’s vendors, products, and promises. They must strip away the misconceptions. Here are three common myths about DCIM.

Myth vs. Reality

Myth: DCIM is about the data center.


A true Data Center Infrastructure Management solution can scale to manage hundreds of thousands, if not millions, of assets sitting in the world's largest global IT infrastructure environments: all of the servers, switches, blades and so on, plus the myriad facilities and building systems that constitute the physical infrastructure. Not just in the data center, but across the entire enterprise, because the walls between IT and facilities are coming down. If a vendor cannot scale to reliably meet the challenge of convergence at the enterprise level, handcuffed by product limitations or a lack of experience, then it is not a complete DCIM vendor.

Myth: DCIM is about monitoring.


Executives today cannot afford to mistake monitoring for managing. Monitoring energy usage at the device level gives you mere data – a single-dimensional perspective on a specific device at a specific point in time, without context. The data must be deciphered or assimilated so you can make sense of it.
Managing energy usage across the power chain requires context-rich information about all of the interrelationships that exist between assets – holistic information that is immediately meaningful and actionable, and lets you track power all the way from the transformer on the street down to every device on every rack.
This insight is best found in an interactive, navigable 3D environment that validates the axiom, “a picture’s worth 1,000 words.” Humans are innately visual creatures, making interactive 3D visualization the perfect environment for presenting holistic information that leads to swift, insightful decision-making. Interactive 3D visualization is core to Data Center Infrastructure Management because it’s the most effective way for IT executives to wrap their arms around the incredible complexity of the modern data center. Conventional 2D spreadsheets or static 3D images, for example, cannot possibly represent the web of interrelationships that a power chain encompasses.

Myth: DCIM is about power.


In its early days, Data Center Infrastructure Management was born of the need to understand and reduce energy consumption. “DCIM is an offshoot of the green IT initiative and originally was designed to do basic energy monitoring, reporting and management at the data centre level,” says analyst David Cappuccio of Gartner. This statement may be true, but DCIM with interactive 3D visualization has evolved far beyond power.
As a solution that integrates IT physical infrastructure management, facilities management, and systems management, DCIM is transforming how the IT ecosystem is seen and managed. A true DCIM solution is a game-changer across a spectrum of challenges:
  • Energy management. Reducing energy consumption and costs – priority #1 in data centers worldwide.
  • Asset management. Optimizing the utilization of assets throughout their lifecycles, from acquisition to decommissioning.
  • Availability management. Proactively identifying the impact of failures and maintenance outages on data center service levels.
  • Risk management. Establishing controls and records-keeping to meet regulatory requirements such as Sarbanes-Oxley, HIPAA, etc.
  • Service management. Monitoring the satisfaction of service requests to identify gaps and implement corrective actions where needed.
  • Supply chain management. Improving coordination of equipment delivery and disposal, resolving bottlenecks that can increase operating costs.
  • IT automation. Automating the planning and execution of infrastructure service requests, eliminating manual steps and speeding time-to-delivery.

The bottom line

The physical layer has increasingly become the single point of IT operational dependency in a world of increasing convergence. DCIM is the natural evolution of this process. The physical layer is now being treated with the same level of priority as the logical layer. Investments in managing the logical layer are shifting to investments in managing the physical layer.
As more than a few executives have proclaimed: "it's about time."

Penetration Test of ManageEngine Password Manager Pro


The application ManageEngine Password Manager Pro was subjected to a penetration test by //SEIBERT/MEDIA to assess the security of this software. After the problems noted in version 6.3 (build 6303) for Windows were corrected, the application received a "Silver" Pentest Certificate from //SEIBERT/MEDIA. We confirm that ManageEngine Password Manager Pro is a secure piece of software.

Management Summary

You are interested in the security of ManageEngine Password Manager Pro. So are we! //SEIBERT/MEDIA is responsible for regularly testing the security of ManageEngine Password Manager Pro. This is an ongoing process, as the software is under heavy development by the vendor. From all that we know, ManageEngine Password Manager Pro is secure. We have found security issues in the past, and they were fixed by the vendor immediately. No software is without bugs or holes.
Security comes from a process that is consistent, thorough and continuous, and from a software vendor that takes care to act on issues fast. We try to achieve all of this for you as a user of ManageEngine Password Manager Pro.

Overview of issues and needed actions

This is an overview of all security checks for which a defect or issue was found. All checks are divided into six categories of risk:

| Category | Title          | Description                       | Issues found | Verified by //SEIBERT/MEDIA |
|----------|----------------|-----------------------------------|--------------|------------------------------|
| 0        | Information    | No risk, informative              | 0            |                              |
| 1        | Hint           | A hint for a defect               | 0            |                              |
| 2        | Recommendation | Recommendation for optimization   | 2            | verification due             |
| 3        | Issue          | Issue which needs to be corrected | 5            | verification due             |
| 4        | Critical       | High risk                         | 5            | verification due             |
| 5        | Severe         | Very high risk                    | 1            | verification due             |
The penetration test included the following components of the web application and the system configuration in version 6.3 (build 6303) Windows.

Module: System

Verification of SSL-TLS security

SSL/TLS is a protocol that sits above the transport layer (commonly mapped to the session and presentation layers of the OSI model). It is used for trusted and encrypted communication over insecure networks.
This test was intended to verify the overall SSL/TLS configuration as well as the offered encryption methods and key lengths.
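As a rough illustration of what such a check involves, Python's standard `ssl` module can report the protocol version and cipher suite a server actually negotiates. This is a generic sketch, not the testers' tooling, and the hostname is a placeholder:

```python
import socket
import ssl

def negotiated_tls(host, port=443):
    """Connect to a server and return the negotiated TLS version and cipher.
    A scanner would flag anything below TLS 1.2 or weak cipher suites."""
    ctx = ssl.create_default_context()            # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

# negotiated_tls("pmp.example.com")  # hypothetical host under test
```

A full assessment would additionally enumerate every protocol version and cipher the server accepts, not just the preferred one.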

Notes on system-side application configuration

This test describes the findings during the assessment on the system side which can't be assigned to a specific issue.

Old software versions

Old or unpatched software is often a serious security issue. Through a vulnerability, even an inexperienced attacker ('script kiddie') could gain root privileges or harm the system in many other ways, e.g. by executing a denial-of-service (DoS) attack or manipulating files.
This test checked for old software versions and their known vulnerabilities.

World-writable and world-readable critical files and folders

World-writable and world-readable files and folders can be a serious security issue. An attacker could add or modify files and thereby compromise the security of the service and system, or could access sensitive data with normal user privileges. It is also possible that an attacker could access these files through another vulnerable service or system component.
In this test, the installation of PMP was checked for such files and folders.

Database configuration and files

A database management system (DBMS) is often the most crucial part of an application, because it holds most or all of the data. Customer and user accounts, bank accounts, and product and payment information must be stored securely so that only privileged users can access the data.
This test checked for incorrect database configuration and public or open database accounts.


Log files

It is common to enable logging for troubleshooting and performance analysis as well as access statistics. Debug log files in particular often contain sensitive information such as usernames, passwords and other data.
In this test, all log files of the application installation were checked for such data.

Client plugins and addons

Client plugins can enhance and extend the functionality of a web application and often allow stronger interaction between the client's computer and the web application.
This test checked for security issues in the plugin implementation.

Module: Web Application

File upload checks

File uploads are common in today's web applications. These are often used to provide users with an option to attach various files in the application. Insufficient server-side checks can be a serious security issue, as an attacker could upload malicious files such as HTML or JavaScript, or could place other files outside the application root.
This test checked for various common security issues such as:
  • Upload of HTML, JavaScript and other potentially malicious files
  • Handling of wrong MIME-Type
  • Handling of null byte chars
  • Header manipulation
  • Path traversal
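A minimal server-side sketch of the first three checks in that list (the extension whitelist and upload directory are assumptions for illustration, not PMP's actual implementation):

```python
import os
from pathlib import PurePosixPath

ALLOWED_EXTENSIONS = {".pdf", ".png", ".txt"}  # illustrative whitelist

def safe_upload_path(filename, upload_root="/var/app/uploads"):
    """Reject null bytes, path traversal and unexpected file types
    before the file is ever written to disk."""
    if "\x00" in filename:                  # null-byte injection
        raise ValueError("null byte in filename")
    name = PurePosixPath(filename).name     # strip any directory components
    if name != filename or name.startswith("."):
        raise ValueError("path traversal or hidden file")
    if PurePosixPath(name).suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError("file type not allowed")
    return os.path.join(upload_root, name)

print(safe_upload_path("report.pdf"))  # /var/app/uploads/report.pdf
```

Real applications should additionally verify the file's content (not just its name) and serve uploads from a domain that cannot execute scripts.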

Forgot-Password function

Most web applications allow users to reset their password if they have forgotten it, usually by sending them a password reset email and/or by asking them to answer one or more "security questions".
In this test, we checked that this function is properly implemented and that it does not introduce any flaw in the authentication scheme. We also checked whether the application allows the user to store the password in the browser ("remember password" function) or if the application allows autocomplete of Password fields.

Cookie attributes

The use of session cookies is the most common method of storing authentication information for a defined period (a session) after successful authentication. It is therefore crucial that these are protected with the correct HTTP flags.
This test checked which flags and values are set. These are:
  • Secure flag: only permit transmission over an encrypted connection. Otherwise it may be possible to read the cookie values in cleartext.
  • HttpOnly flag: disable client-side access to cookies. This prevents, among other things, the most common XSS attacks.
  • Domain flag: the scope in which the cookie is valid.
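A quick sketch of what those flags look like on the wire, using Python's standard `http.cookies` module; the cookie name and domain here are illustrative, not PMP's actual values:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["SESSIONID"] = "abc123"
cookie["SESSIONID"]["secure"] = True     # transmit only over HTTPS
cookie["SESSIONID"]["httponly"] = True   # no access from client-side scripts
cookie["SESSIONID"]["domain"] = "app.example.com"

# The resulting Set-Cookie header carries all three attributes.
print(cookie["SESSIONID"].OutputString())
```

A pentest simply inspects the real Set-Cookie headers for the presence (or absence) of these attributes.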

Cross site request forgery (CSRF)

Cross site request forgery (CSRF) is an attack which forces an end user to execute unwanted actions on a web application in which he/she is currently authenticated. With a little help from social engineering (such as sending a link via email or chat), an attacker may force the users of a web application to execute actions of the attacker's choosing. A successful CSRF exploit can compromise end-user data and operations in the case of a normal user. If the targeted end user is an administrator account, it can compromise the entire web application.
This test checked for such CSRF flaws in the application.
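The classic defense is a synchronizer token: the server issues a random per-session token that every state-changing request must echo back, and a cross-site attacker cannot read it. A generic sketch of the idea, not PMP's actual mechanism:

```python
import hmac
import secrets

def issue_csrf_token():
    """Generate an unguessable token, stored server-side with the session
    and embedded in every form the server renders."""
    return secrets.token_hex(32)

def verify_csrf_token(session_token, submitted_token):
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(session_token, submitted_token)

token = issue_csrf_token()
print(verify_csrf_token(token, token))               # True
print(verify_csrf_token(token, issue_csrf_token()))  # False
```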

Reflected Cross-Site Scripting (Type 1 XSS)

Cross-Site Scripting attacks are a type of injection problem, in which malicious scripts are injected into the otherwise benign and trusted pages. Cross-site scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user.
Reflected attacks are those where the injected code is reflected off the web server, such as in an error message, search result, or any other response that includes some or all of the input sent to the server as part of the request. Reflected attacks are delivered to victims via another route, such as in an email message, or on some other web server. When a user is tricked into clicking on a malicious link or submitting a specially crafted form, the injected code travels to the vulnerable web server, which reflects the attack back to the user's browser. The browser then executes the code because it came from a "trusted" server.
In this test the application was thoroughly checked for such reflected script vulnerabilities to disclose erroneous or incomplete protection measures.
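The standard countermeasure is output encoding: escape untrusted input before reflecting it into the page. A generic sketch using Python's standard `html` module (not PMP's actual code):

```python
import html

# Untrusted input, e.g. a search term taken from the request.
user_input = '<script>alert(1)</script>'

# Escaped before being echoed back, the payload renders as text, not code.
page_fragment = "You searched for: " + html.escape(user_input)
print(page_fragment)  # You searched for: &lt;script&gt;alert(1)&lt;/script&gt;
```

Context matters: HTML-escaping is correct for element content, while attribute, URL and JavaScript contexts each need their own encoding rules.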

Persistent Cross-Site Scripting (Type 2 XSS)

Cross-Site Scripting attacks are a type of injection problem, in which malicious scripts are injected into the otherwise benign and trusted pages. Cross-site scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user.
Stored attacks are those where the injected code is permanently stored on the target servers, such as in a database, in a message forum, visitor log, comment field, etc. The victim then retrieves the malicious script from the server when it requests the stored information.
In this test the application was thoroughly checked for such stored script vulnerabilities to disclose erroneous or incomplete protection measures.


An Introduction to High Perf Reporting Engine from NetFlow Analyzer

When it comes to traffic reporting and network troubleshooting, such as finding bottlenecks or bandwidth spikes, complete port-level analysis of raw flows is required to identify the cause.

NetFlow Analyzer's troubleshooting report, with sub-minute visibility, helps identify network spikes, bandwidth saturation and so on. The troubleshooting report gives complete port-level information for the selected time range, as it is generated entirely from raw NetFlow data.

In older versions, raw data could be stored for a maximum of one month, so users could drill down into each and every flow to identify bandwidth spikes only within those 30 days.

Thirty days of raw storage is not sufficient for detailed analysis across an entire year, whether for capacity planning or government auditing. Certain countries mandate storing raw traffic flow data for more than six months.

ISPs and MSPs, meanwhile, want all of their customer data stored for more than three months for accurate billing.

With all of this in mind, we, the NetFlow Analyzer team, have brought out a new feature: the HighPerf Reporting Engine.

What is this HighPerf Reporting Engine?

The HighPerf Reporting Engine of ManageEngine NetFlow Analyzer is a highly scalable database dedicated to raw storage; it can store raw data for more than six months.

One might wonder whether storing all flows from all devices for more than six months would take huge disk space, and whether report generation would become sluggish over longer time periods. The HighPerf Reporting Engine uses a columnar database, which is best suited for instantaneous report generation over long time periods and for compressed storage.
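A toy illustration of why column-oriented storage suits flow archives: values within a single column (protocol, port) repeat heavily, so a simple scheme like run-length encoding collapses them, whereas row storage interleaves the fields and breaks up the runs. This is a sketch of the general idea, not the engine's actual format:

```python
def run_length_encode(column):
    """Collapse consecutive repeated values into [value, count] pairs."""
    runs = []
    for value in column:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

# Four flow records; stored column-wise, the protocol column compresses well.
flows = [
    ("10.0.0.1", "TCP", 443),
    ("10.0.0.2", "TCP", 443),
    ("10.0.0.3", "TCP", 80),
    ("10.0.0.4", "UDP", 53),
]
protocol_column = [proto for _, proto, _ in flows]
print(run_length_encode(protocol_column))  # [['TCP', 3], ['UDP', 1]]
```

The same column-wise layout also means a report that touches only a few fields can skip reading the rest, which is what makes long-period queries fast.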

The following are the advantages offered by the HighPerf Reporting Engine of NetFlow Analyzer:

  • Increased raw data storage capacity
  • Instant Report Generation
  • Columnar Database
  • Shortened look-up time
  • Saves Troubleshooting Time
  • Improved data compression
  • Better Capacity Planning
  • Insightful reports
  • Better bandwidth management

How to Deploy this HighPerf Reporting Engine?

As of the latest version, NetFlow Analyzer ships with PostgreSQL by default, which stores 30 days of raw data plus lifelong aggregated data (used for historical reporting).

The HighPerf Reporting Engine comes as a plug-in that is installed on top of an existing NetFlow Analyzer installation; alternatively, you can download the complete package, which includes the plug-in.

It is also possible to host the HighPerf database engine on a remote host and connect the NetFlow Analyzer to this database.

We will cover the installation of the HighPerf Reporting Engine in detail in our next blog.


Praveen Kumar

NetFlow Analyzer Technical Team


Sunday, January 13, 2013

Changing careers, and the process..

It is NEVER too late for a career change! Sure, you might not have direct experience in a certain industry or job, but you need to prove to any hiring manager that your existing skills are, in fact, transferable skills.

If you're debating about making a career change, don't be afraid. Even if a career switch later in life seems like a completely radical change with many possible consequences attached to it, you should still go for it if it's something you really want to do. My best advice is to set up a plan before making the dive. A large-scale transition will not happen overnight, and this is why it's important to ensure you have a "plan of attack."

Also, make sure your career change is realistic. Although I encourage everyone to follow their dreams, you also need to stay realistic. If your dream is to become a pilot but you've worked in banking for the last 15 years, becoming a pilot will be a lot harder (but not impossible)! Also remember to be flexible. You are making a career change that could involve a lower salary or relocation. These are some of the sacrifices you could be asked to make in the short term.

When you begin applying for new roles, you need to ensure your resume is targeted toward this new job. Obviously you are not going to have direct experience, so it's important to highlight not only your current skills and achievements, but also (and most importantly), that you are able to adapt your skills for this new job.

In making the career change, your skills are by far your best selling point. Many skills that you use on a day to day basis (such as leading, managing, liaising and communicating, for example) are all transferable skills that you can use to prove to a hiring manager that you are right for a particular job.

5 point plan to making a career change:

  1. Make sure of your reasons for wanting a career change. One bad day at work or hating your boss does not mean you should change careers.

  2. Brainstorming – Sit down and brainstorm ideas of the type of industry/job you really want to do

  3. Planning – Set out a plan to follow. Make it realistic. Remember your career change won't happen overnight.

    Realistically, it can take about 6-12 months. Don't quit your job on day 1. Included in planning is financial planning. How much is this career change going to cost you? How much do you plan to get paid? You need to know these answers!

  4. Networking – Talk to friends, speak to recruitment agents and sign up to online networking sites

  5. Executing your plan. Speak to an expert in regards to interviewing, resume writing and cover letter writing.

    Apply directly, and begin to follow the steps of your plan.

Career management refers to planning, supervising, controlling, handling and administering one's professional life. It comprehensively covers a detailed view of what you want to be, where you want to go, how you will get there and, ultimately, how long you intend to stay.

All the answers are directly related to one's personal goals and targets. Being able to handle changes in your career will best enable you to avoid the mistakes of the past, prepare a confident approach to the present and implement a positive direction for the future. Overall, managing your career will help maintain and develop your professional growth, development and direction.

When should I begin to manage my career?

Successful career management can start as early as the first day you walk into school or college. You should clearly identify your goals before enrolling in a particular degree or course and preparing for a lifelong career. (This saves a lot of money and time later down the track!)

Be specific about what you are good at and what you enjoy doing, and most importantly what you can see yourself doing every day going forward. Being able to answer these questions will help you understand yourself better and identify the areas in which you are most likely to succeed.

If you find that you have made a mistake, don't panic. Exhaust your options, understand the value-added skills you have and how best you can utilise these existing skills.

Don't be afraid to ask questions. Ask yourself if you are capable of performing the task or if you see yourself progressing in a certain area. If the answer is yes, then begin your quest to achieve your targets. Never forget to network and seek out as many people and opinions as possible. You just never know where the next door will open.

How long does career management last?

Career management is a lifelong exercise. Balancing your work and social life is a juggling act. It is not confined to one period in your life or a particular profession. In life many things change, so don't be afraid to change with the times. It is all about adaptability and learning.

The ability to learn from every setback will make you smarter in making your next career move. The employment market may seem crowded and unpromising, but being open to change will help you survive during the dark months. The changing times are not moments of despair, but rather moments of opportunity.



What to do before asking for a raise

Before you ask for a raise, the most important thing you need to remember is that you need a reason for asking for one. An employer is not just going to hand out extra money to you because they like you – you need to give them a compelling reason to do so.

Basically, you need to give them something that exemplifies your hard work and shows you are a positive asset to the company. Think of a few ways of doing so; below are just a few examples:

Arriving on time on a consistent basis:

No employer likes workers who show up late – ever. If you consistently come to work early or on time, your boss will definitely take notice of this and will appreciate your punctuality. You've already given yourself a head start.

Taking on an extra work load:

Volunteering to do more than what is expected of you helps to build your reputation within the company. You will be recognized as a leader, as someone willing to help out and as someone who can be counted on. You may also gain some valuable experience within other departments, and extra knowledge never hurts. Employers love this type of employee, and will be more likely to go the extra mile to keep them on board.

Keeping track of your performance:

There is nothing better than being able to show concrete examples of how you have benefited the company. Have sales dramatically increased since you came on board? Do you consistently meet or exceed your targets?

Of course, some people argue that taking on an extra work load or working overtime is a negative because you allow the company to take advantage of you. Well, like it or not, this is how the world works. If you want to stay in the same position year after year, do the minimum, but if you want to move up, putting in that extra effort will be required of you. Raises are not free handouts for everyone – they are reserved for the ones who put in the extra effort. 
