Data Center is our focus

We help you build, access and manage your data centers and server rooms

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Saturday, March 16, 2013

Peripherals market supports the SMB market

Peripherals market to drive Indonesia SMB hardware spend

Improving cost efficiency is high on the agenda of Indonesian small and medium businesses (SMBs) and enterprises, and this will drive changes within the peripherals market.

Vendors offering color copier-based multi-function printers (MFPs) will see an increase in revenue in 2012 as business acceptance heightens, driven by SMBs prioritizing longer-term cost efficiency over shorter-term cost reduction. In 2011, color copier-based MFPs took 17.8% of total market share, compared to 13.5% a year earlier.

International Data Corporation (IDC) predicts that the hard copy peripherals (HCP) market in Indonesia will continue to grow as positive economic news will pave the way for SMBs and Enterprises to spend more on hardware.

Sharing insights on the "IDC Indonesia ICT 2012 Top 10 Predictions" report, Sudev Bangah, Senior Research Manager, IDC Asia/Pacific said, "Indonesia will move into a transformative phase where ICT will play a major role in enabling traditional economies and boosting economic growth in the society. The country has to rapidly adjust its ICT infrastructure to accommodate the increasing influx of foreign investments and this will have a positive impact on the overall ICT market."

Indonesia's economic growth is expected to top all other ASEAN countries in 2012. IT spending is forecast to reach US$12.9 billion by the end of the year, an 18% year-on-year (YoY) growth.

Drawing from the latest IDC research and internal brainstorming sessions amongst IDC's regional and country analysts, here are some more key ICT predictions in 2012 for Indonesia. 

One. Indonesia tops 2012 for stability and growth

ICT budgets at end-user organizations are growing in parallel with the economic growth expected in the nation, which leads IDC to be upbeat about the nation's ICT spending in 2012. Discussions revolving around datacenters, managed services, social media, cloud computing and mobility have garnered heavy interest from both local and global organizations. IDC therefore believes that Indonesia is the key Southeast Asia market that IT vendors will focus their attention on in the year ahead.

Two. Cloud Computing – Moving from awareness to understanding

In 2011, IDC found that more than 50% of end-user organizations in Indonesia were either actively evaluating or planning to adopt public cloud services within the next 12 to 24 months. In 2012, IDC anticipates that the cumulative efforts of both local and foreign providers will garner more interest in this emerging technology. IDC expects a shift in the thinking of Indonesian end-user organizations in 2012; companies will begin to better understand the value proposition and mechanics of cloud computing.

Three. Telecommunications spending continues to surge

While this has been a common theme in IDC's predictions for the past year, its relevancy is now heightened as IDC is witnessing a surging demand for network coverage across all major cities in Indonesia. In light of this, telecommunications operators are beginning to devise means to upstage competition and ultimately win over their target group. 

As a showcase of how much demand is heightening, IDC estimates that 2011 closed with 43 million mobile phone shipments into Indonesia, which is aggressively driving utilization of both voice and data telecommunications services. Fixed data services and broadband also continue to grow, driven primarily by the residential and business segments.

Four. 2012 – Mobile broadband explodes

Affordable smartphones and USB modem dongles are beginning to be ubiquitous in many cities in Indonesia. The current view of the mobile broadband ecosystem is one which is competitive, and where service providers are forced to offer affordable broadband packages in order to compete. With the prices of smartphones coming down to around the US$100 mark, and USB dongles being available for an average price of US$25, IDC is witnessing all the makings of a society with a high propensity to adopt mobile broadband due to the demand in content, social media applications and connectivity.

Five. Further movement from feature phones to smartphones

In line with the global and regional shift towards "smarter" devices, IDC is witnessing a transition in Indonesia as affordable smartphones make their way prominently into the market. These phones are capturing the attention of a demographic that is beginning to swap aging feature phones for devices that boast a faster and more sophisticated user experience.

The mobile phone market in Indonesia is fairly saturated, ranging from global brands to top local brands that have successfully reached the middle- to lower-income group, which demands smartphones at more affordable prices.

Six. Social media as a marketing tool

Based on government-released statistics, there are approximately 35 million Internet users in Indonesia, a figure expected to exceed 95 million by 2015. Even more extraordinary, Facebook claimed an estimated 32 million registered Indonesian users at the start of 2011. Indonesia is also cited as one of the countries with the highest penetration of Twitter users globally.

This translates to some golden opportunities for marketers as there appears to be a huge untapped marketplace where potential seems limitless and outreach has no boundaries.

Seven. IPTV to gain further traction

PT Telkom launched its IPTV services in Indonesia in mid-2011. With a wide range of features, including on-demand screening (pause/play) and recording functions, the service captivated an audience won over by its novelty; IPTV allowed them to "control" the way they watch television. IDC predicts that IPTV subscribers will double in 2012.

Eight. The growing tablet scene

Indonesia has emerged as one of the largest markets in Southeast Asia for media tablet consumption. With rapid development in Indonesia and a stronger drive for interconnectivity due to surging interest in social networking websites, Indonesia looks poised to fully usher in an era of media tablets, replacing the mini netbooks that rose to prominence in the same market barely three years ago.

Nine. Towards an advanced information society

In the past three years, the government has positioned ICT at the top of the agenda within its transformative plans and earmarked ICT as a key enabler in aiding its traditional economies to reach a higher plateau, as well as a means to reduce poverty by opening up a new sector to create jobs and opportunities.

OpManager can be used to manage your virtual servers

A data center consolidation is under way at global biotechnology company Vertex Pharmaceuticals Inc., and virtualization and virtualization management are playing a key role. Chris Pray, senior engineer for global information systems at Cambridge, Mass.-based Vertex, is overseeing this consolidation and steering the company down its virtualization path. He recently spoke about virtualization management, including the must-have policies, tools and procedures that will ensure the benefits the company has gained through virtualization are not lost as the environment quickly grows and shifts.

How is your data center design strategy changing?
Pray: Up until about three months ago, we owned four data centers on-site in Vertex-owned buildings: one [data center] in San Diego, two in Cambridge and one in the U.K. We just finished consolidating the two in Cambridge into a colocation [he could not disclose the name of the colocation provider]. With that most recent [colocation], we moved about 450 servers, virtual and physical, along with a bunch of networking gear and backbone. The one back in July was a lot larger, probably double the size. We also have plans to colocate the sites in San Diego and the U.K.
Why are you moving away from owning data centers to colocation?
Pray: We outgrew our data centers on-site. Even with virtualization, we were still outgrowing the square footage, electricity and cooling of on-site data centers. Moving to a hosted facility gave us stability in all those areas, and scalability.
What virtualization path did you go down leading up to the consolidation?
Pray: There has to be a substantial argument for a nonvirtualized environment. We have a virtualization-first policy for provisioning. Sometimes there are arguments for not virtualizing, for either large databases or large file systems. We process a lot of scientific data, and oftentimes that data is enormous, with enormous storage requirements that exceed the capabilities of VMware. Other times -- a high-end database, for example -- the limitation is memory. If we have one Oracle database that requires 64 GB to run effectively, that doesn't make a good case for a virtual machine.
When you started out, what benefits were you hoping to gain through virtualization?
Pray: First are operational expenses: HVAC cooling and electricity. Next is portability. Virtual machines are much more portable than physical machines. We can transfer workloads between operating systems easily. We can transfer operating systems between cluster and hosted resources easily. Upgrades are much easier to manage. Self-service is a big benefit, giving application owners and developers a single pane of glass to manage all their virtual machines and have visibility into them.

What other technologies are you combining with virtualization to optimize data center efficiency?
Pray: Server density is something Vertex has really been trying to do. We've not only taken a virtualization-first approach, but we've moved into high-density computing in the form of blades. So, we're standardizing on all blade hardware to constitute our [VMware] ESX clusters, which is a much more efficient approach when dealing with cooling, electricity and footprint. On a hosted facility, they charge you by the square footage, so if you can shave three racks down to half a rack, that's a big gain.
How has virtualization translated into gains for the business?
Pray: It's given the business a lot more stability in our infrastructure. It's a much higher-availability platform than standalone hosts. We have [VMware] High Availability and DRS [VMware Distributed Resource Scheduler] fully enabled on all of our clusters, so it provides peace of mind, knowing that if something does go down, it's going to come right back up.
Has application performance improved as a result?
Pray: Several applications performed better when we virtualized them. Just moving from one older server core at a slower speed, to a newer core at a higher speed, even with a virtualization layer in between, the application owners and business owners have accolades for speed increases at the application level.
How is virtualization changing your disaster recovery strategy?
Pray: Our DR strategy is a work in progress. Ultimately the goal is to have a primary site in Cambridge and a DR site in San Diego. The strategy on a broad stroke would work with our Hewlett-Packard EVA [Enterprise Virtual Array] storage arrays replicating at the SAN [storage area network] level using continuous access. Then we will use VMware Site Recovery Manager [SRM] to tie in and orchestrate a DR failover; with all of the scripted pieces that come in failing over to another location, meaning IP address changes, DNS [domain name server] changes, protected storage groups.
Right now we're not there; the SRM product is in place, it's been qualified, and we are working toward that end result. But even on a virtualized platform, we are seeing increased speeds on our traditional backups, so we're getting a shorter completion time for system backups. Also, the recovery time objective is much smaller.
How difficult is this environment to manage?
Pray: It is quite complex to administer a virtual environment. There's a lot more moving parts, a lot more things to consider. You do get gains in efficiencies in administration, but there are also administration tasks that you need to consider when you are virtualized. It's extremely difficult to administer and manage something that you can't see.

What tools do you use for capacity management and other aspects of virtualization management?
Pray: We use tools like Akorri [Inc.'s BalancePoint] for capacity and monitoring. That [tool's] capacity management spans not just virtual machines but physical as well. We use Groundwork [Open Source Inc.'s network monitoring software]. It's sort of a monitor of monitors. It takes input from our Orion [network monitoring product] by SolarWinds [Inc.], VMware vSphere and Red Hat [Inc.'s Satellite Server], and warns us on down services, down servers, a downed network connection. We most recently purchased VKernel [Corp.'s] capacity management suite -- to answer the questions, "Where can I put these next dozen VMs?" and "What's it going to do to my resources?"
We also use the tool for rightsizing. Rightsizing is a very overlooked task in a virtual environment. [The tool] has an abstract slider that lets you adjust the CPU, memory, disk and network resources as necessary. Collecting data points on consumption of those resources gives you a projection of what's going to be needed in the future. It alarms when standard deviations are broken and thresholds are being crept up on, so it's a great tool for rightsizing virtual machines that are overprovisioned. The job of the tool is to make [the environment] lean, to do the job with just the resources required. That's really the whole objective of virtualization.
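As a rough sketch of the rightsizing logic described above (collect consumption data points, then trim allocations that sit far above observed peaks, or grow ones being crept up on), the sample values, names and thresholds here are illustrative, not taken from any particular tool:

```python
from statistics import mean, stdev

def rightsize(samples_mb, allocated_mb, headroom=1.25):
    """Suggest a memory allocation from observed usage samples (in MB).

    Recommends mean + 2 standard deviations, padded by `headroom`,
    so the VM keeps a buffer above its typical peak consumption.
    """
    peak_estimate = mean(samples_mb) + 2 * stdev(samples_mb)
    suggested = int(peak_estimate * headroom)
    if max(samples_mb) > 0.9 * allocated_mb:
        return {"action": "grow", "suggested_mb": suggested}
    if suggested < allocated_mb:
        return {"action": "shrink", "suggested_mb": suggested}
    return {"action": "keep", "suggested_mb": allocated_mb}

# An overprovisioned VM: 16 GB allocated, but usage hovers near 2 GB.
print(rightsize([1900, 2100, 2000, 2200, 1800], allocated_mb=16384))
```

A capacity-management product does the same thing continuously across the whole estate, which is why overprovisioned machines surface as alarms rather than requiring a manual audit.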
How hard is it to find talent that specializes in managing virtual environments?
Pray: I've been looking for a year to fill full-time and contract positions. It's a difficult skill set to fill because you need to not only know about VMware, but you have to have a fundamental understanding of networking, storage, backups and operating systems. It's really a cumulative skill set. We're talking about shared resources here. You have to know how it all ties together.
How fast is your virtual environment growing?
Pray: We're probably 80% virtualized right now and climbing. It's a difficult target to hit. This environment is very big. We have 210 sockets of ESX, so that's roughly 110 hosts across three [data center] sites. Actually, I just installed seven hosts, so that's another 14 sockets, so we're about 224 sockets. There's also a last-minute order in for another 16 hosts, so that's 32 sockets. That's just ESX hosts, not virtual machines.
How have your data center design policies and procedures changed in light of this speed of deployment?
Pray: Policies and procedures, it's not new, but the technology is moving faster than the documentation and the procedures, so it's always a catch-up game. Here and at other companies that I've worked at, policies and procedures grow organically as the technology evolves. You find as a seasoned administrator what works best and incorporate that into best practices. A lesson learned becomes something you follow through on the next time.
What is an example of a lesson learned that became a virtualization management policy?
Pray: Storage naming conventions. The environment I walked into had … three different storage arrays. One of the things that was difficult was identifying, through vSphere, all the different attributes of a particular piece of storage. I need to know where it is, what storage array it's on, what type of disks are on that array, what kind of protocol is being used to present the storage to the ESX host, and what the storage is going to be used for -- production, validation, database virtual machines. So, one of the first things I did was put a naming convention in place for the data storage in the virtual environment. That has become exceedingly important as the storage environment grows. We're dealing with hundreds and hundreds of terabytes across three different sites.
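A convention like the one Pray describes, encoding site, array, disk type, protocol and purpose into the datastore name, could be sketched as follows. The field order and codes here are hypothetical; any real scheme would use the site's own vocabulary:

```python
FIELDS = ("site", "array", "disk", "protocol", "purpose")

def datastore_name(site, array, disk, protocol, purpose):
    """Build a conforming datastore name, e.g. 'CAM-EVA01-FC-ISCSI-PROD'."""
    return "-".join(p.upper() for p in (site, array, disk, protocol, purpose))

def parse_datastore_name(name):
    """Recover the encoded attributes from a conforming name."""
    parts = name.split("-")
    if len(parts) != len(FIELDS):
        raise ValueError(f"non-conforming datastore name: {name!r}")
    return dict(zip(FIELDS, (p.lower() for p in parts)))

name = datastore_name("cam", "eva01", "fc", "iscsi", "prod")
print(name)                                   # CAM-EVA01-FC-ISCSI-PROD
print(parse_datastore_name(name)["purpose"])  # prod
```

The payoff is exactly what the interview notes: an administrator looking at a datastore list in vSphere can read off array, protocol and workload without chasing each attribute through the storage layer.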

What advice would you give others trying to manage a virtual environment?
Pray: Manage standardization, naming conventions, logical folder structures. Make sure you have a consistency across your virtual environment and an architecture that is easy to follow and easy to scale. If you don't, it will turn into the Wild, Wild West, and it will become increasingly harder to manage over time. Even if you start off small, use naming conventions, because as [the environment] grows two-, three-, ten-fold, it will be a nightmare to manage if you don't. Once it's in place, it's much more difficult to adjust later on and correct.
Chris Pray, senior engineer for global information systems, Vertex Pharmaceuticals Inc.

20 hours per week on IT problems? Just use monitoring software

The IT department is under ever-increasing pressure to improve operational efficiency, in response to myriad factors such as the onset of the cloud paradigm and ongoing macroeconomic woes.
But a report from Kelton Global, commissioned by IT service optimization firm TeamQuest, finds that IT professionals in large enterprises are still spending an average of 20 hours a week responding to unexpected IT issues.
The report - based on a survey of 214 US enterprises with more than 1,000 employees - alleges that too few large companies are proactively addressing these problems before they arise.
Unexpected issues chew up man-hours
The survey shows that the average IT department deals with 20 unexpected issues – such as network outages or slowdowns and equipment failures – per week.
Each issue encountered takes an average of one hour to resolve, and requires the attention of five IT staff members.
Close to half of IT departments encounter weekly network slowdowns and outages, and nearly as many have to deal with underperforming applications every week.
Around 38% encounter equipment failures at least once a week, while 35% regularly encounter problems arising from third-party software and services.
Less common but potentially more disruptive: 38% of IT departments have reported experiencing a cloud outage. Of these outages, close to four in ten occurred on an internal company cloud and could have been prevented had better resources been available.
All this time spent solving unexpected issues leaves most IT managers with little time to spare for proactive improvement efforts. The average department spends only 8% of its time on projects such as capacity planning, problem prevention, application tuning or data management.
This compares to 30% of time spent solving unexpected IT problems, and a further 25% dedicated to management of budgets, vendors and upgrade-related matters.
Around 82% of respondents admit there are times when their department does not identify performance issues before users call in a complaint, and nearly half say that they lack needed background information in the event of an incident.
Improving efficiency
Organizations that take a proactive approach deal with nearly half as many unexpected IT issues per week as their reactive peers, and require significantly less time and manpower to solve problems, the report states.
Furthermore, a massive 90% of IT managers believe there is room for efficiency improvements in their departments, with nearly six in ten nominating staff training as a potential solution. Nearly half would also like to increase the size of their department.
Beyond employee assets, IT managers believe that adding or improving on diagnosis and analytics tools can improve efficiency. Another potential solution is consolidating servers to cut infrastructure admin workload, nominated as a solution by 57% of respondents.
One in two IT managers meanwhile believe improving capacity management processes could help squeeze efficiency out of their department. What's more, 68% of organizations that employ capacity management believe it has improved IT efficiency.
Yet only 11% of IT managers think their capacity management processes can be considered "most mature," with six in ten believing they fall in the middle of the maturity scale.
That said, 89% of IT managers believe that capacity management is risky without proper pre-planning.
Another popular method to improve efficiency is the use of virtual machines, but 80% of IT managers who use such software can name at least one related struggle, such as performance problems or identifying the best possible configuration.
Only two in five IT departments feel they have proper virtualization management in place, and more than three quarters believe their overall IT risk would shrink if they did have an adequate system.

Friday, March 15, 2013

Find network problems immediately

Identify network faults and start resolving them much before your boss or an end-user calls!

Network Fault Management is all about staying current with what is happening in your network, be it an unforeseen outage or performance degradation. Detect, recover and limit the impact of failures in your network using OpManager, your 24/7 network surveillance. The powerful fault management capability of OpManager helps you isolate and resolve a fault in a wink.

Be the first to detect

Detect a fault from wherever you are! OpManager alerts you via SMS and email. Alerts are also published as RSS feeds. The ability to instantly alert you to network trouble ensures that no time is lost in resolving it.

Isolate and troubleshoot a fault

Perform first-level troubleshooting to assess the damage and work out possible quick resolutions. Drill down to the root cause quickly and speed up resolution using the interactive built-in web-based tools.

Track and resolve outages quickly

Automate resolutions by plugging in your own programs/scripts and let a fault 'heal itself'. So even as you are busy jumping between locations and floors attending to network needs, OpManager keeps some faults at bay. Track faults requiring prolonged analysis by logging requests in the helpdesk.
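As a minimal illustration of the 'heal itself' idea, here is the kind of standalone script a fault-management tool could be configured to run when it raises a service-down alarm. The service name and the use of systemd are assumptions for the sketch; OpManager's actual plug-in mechanism is documented separately:

```python
import subprocess

def needs_restart(state):
    """Decide whether an automated restart should be attempted."""
    return state in ("inactive", "failed")

def heal(service, run=subprocess.run):
    """Probe a systemd service and restart it if it is down.

    `run` is injectable so the decision logic can be exercised
    without touching a real init system.
    """
    probe = run(["systemctl", "is-active", service],
                capture_output=True, text=True)
    state = probe.stdout.strip()
    if needs_restart(state):
        run(["systemctl", "restart", service], check=True)
        return f"{service}: restart attempted (was {state})"
    return f"{service}: ok ({state})"
```

Wired to a 'service down' alarm, a script like this retries routine failures automatically, so only the stubborn faults end up as helpdesk requests.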


10 keys to successful patch management

Takeaway: In honor of Microsoft Patch Tuesday, here’s a look at 10 areas IT pros need to keep an eye on to ensure a smooth patch management strategy.
The recent spate of Java vulnerabilities has required a number of large vendors to react almost instantly to optimize security levels. But as good as these reactions are, organisations urgently need to apply insightful strategic thinking to ensure that security updates are reaching the entire organisation’s IT estate.
CentraStage analyzed anonymous hardware and software data (including thousands of PCs and servers in 6,000 organisations across public sector establishments currently running our solution) and found that 40 percent of servers and workstations are missing security patches. In addition, six vendors — Microsoft, Adobe, Mozilla, Apple, Oracle, and Google — together released 257 security bulletins/advisories fixing 1,521 vulnerabilities in 2011. In 2010, these vendors fixed 1,458 vulnerabilities, demonstrating the extent of the issue as well as the numbers of bulletins we annually face.
With more and more organizations supporting remote working, the challenge isn’t just to implement patches as they are released, but to be fully confident that devices have been updated and are thus continuously safeguarded. So what areas should IT experts tick off the list for a successful patch management implementation?

1: Ensure transparency

At the heart, asset discovery is essential. If you don’t know what you’ve got, you don’t know the extent of the problem you may have. If you do nothing else, make sure you know where your IT assets are; this is a quick gain that will put your house in order.
Once the estate is established, you need real-time visibility of the assets you support. Given the urgency with which we need to manage patches, the first secret is to have not only full awareness of the estate but instant knowledge of its health too.

2: Don’t just look at the security

Knowing the whereabouts and health of the IT estate is paramount, as it provides the intelligence for ensuring its security. A study of public sector CIOs in December 2012 found that 87% of respondents were either concerned or very concerned about the risks associated with IT security breaches. In addition to patching, keep an eye on IPsec, as it can be used to protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host.

3: Define your patch nirvana

While the audit and assessment element of patch management will help identify systems that are out of compliance with your guidelines, you should also work to reduce noncompliance. Start by creating a baseline — a standard you want the entire estate to comply with. Once complete, it’s easier to bring controls in line to ensure that newly deployed and rebuilt systems are up to spec with regard to patch levels.
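The baseline idea above amounts to a simple compliance check: define the set of patches every system must carry, then diff each host's installed set against it. The patch IDs and hostnames below are illustrative:

```python
def compliance_report(baseline, installed_by_host):
    """Report which hosts are missing patches from the baseline.

    baseline: set of required patch IDs.
    installed_by_host: dict mapping hostname -> set of installed patch IDs.
    Returns a dict of noncompliant hosts and their missing patches.
    """
    report = {}
    for host, installed in installed_by_host.items():
        missing = sorted(baseline - installed)
        if missing:
            report[host] = missing
    return report

baseline = {"KB001", "KB002", "KB003"}
fleet = {
    "ws-01": {"KB001", "KB002", "KB003"},   # compliant
    "ws-02": {"KB001"},                     # missing two patches
}
print(compliance_report(baseline, fleet))   # {'ws-02': ['KB002', 'KB003']}
```

Run against newly deployed or rebuilt systems, the same diff tells you immediately whether a build image has drifted from the baseline.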

4: Face the facts

You must know which security issues and software updates are relevant to your environment. Further analysis of our data showed that 50 percent of PCs and laptops are still running Windows XP, and 32 percent of devices are more than four years old.
Beyond patch management and protection against vulnerabilities and exploits, which by now must have caught the attention of IT leaders globally, lies the preparation and planning for the end of Windows XP support. If you do not replace these machines, there is no way to safeguard them. If you do replace them, there are implications for expenditure. Make sure you have a realistic view of patch management and its limitations, but also ask whether the discipline of patch management indirectly keeps the infrastructure and IT estate viable from support and budget perspectives.

5: Do it your way with software policies

You can customise policies targeted at filters or groups at the account or profile level. The filter targets can be either the default filters provided within your account or any custom filters you have previously defined. The secret here is to define custom filters or groups to identify devices with specific criteria. One or more of these filters can be associated with a policy to target those devices.
This goes back to your baseline creation. Set the policies from the outset and customisation will be a simple step forward.
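The filter-and-group targeting described above amounts to matching devices against attribute criteria and applying a policy to the union of the matches. A minimal sketch, with hypothetical device attributes:

```python
def make_filter(**criteria):
    """Build a device filter from attribute=value criteria."""
    def matches(device):
        return all(device.get(k) == v for k, v in criteria.items())
    return matches

def target(devices, *filters):
    """Return devices matching any of the given filters (the policy targets)."""
    return [d for d in devices if any(f(d) for f in filters)]

devices = [
    {"name": "ws-01", "os": "winxp", "site": "hq"},
    {"name": "ws-02", "os": "win7", "site": "hq"},
    {"name": "srv-01", "os": "win2008", "site": "dc"},
]
legacy_xp = make_filter(os="winxp")   # custom filter: XP machines
dc_devices = make_filter(site="dc")   # custom filter: data center site
print([d["name"] for d in target(devices, legacy_xp, dc_devices)])
# ['ws-01', 'srv-01']
```

Defining the filters once, at baseline time, is what makes later customisation "a simple step forward": a new policy only needs to name the filters it targets.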

6: Get the timing right

Why wouldn't you implement a patch management update as soon as you can? With baseline mechanisms in place, there's no need to delay. However, you should consider the time of day for updates by policy: what time will have the least impact on day-to-day business? The ideal timing for patch updates should follow rollout best practice. Consider the day of the week, the impact on the business if something doesn't go smoothly, and whether there are sufficient time and resources to rectify issues if necessary. If your IT management solution is on-premise rather than cloud-based, you might have to take responsibility for the scale and load of the update.

7: Audit first — is it too broken to be fixed?

Gaining visibility of devices that are vulnerable is crucial, but so is analysing the overall health of each device. Ensure that all devices are audited prior to rolling out patches or patch policies. There could be a more urgent matter requiring attention before the device can be brought in line.

8: Keep it simple

We are led to believe that the bigger the enterprise estate, the more complex the management. But in most cases, solutions are easily scalable. The issue comes with usability. As complexity increases (and in some cases, the number of solutions and providers also grows), the technology team is used more and more to ensure the estate is kept up to date. Keep usability as simple as possible. There are solutions that do not even require a technically skilled person to ensure the estate is kept up to date quickly and easily.

9: Consider automated solutions

Often, patch management is a distress purchase, because vulnerabilities such as the ones we've seen recently place patch management in a crisis-management budget rather than an ongoing IT budget. Of course, this has financial implications. Some enterprise IT management solutions may save you money by providing tools that automatically audit and monitor. If automation is behind the scenes, it doesn't interrupt the business and will keep all software solutions running smoothly, without manual input.

10: Visualise your patch management

Make sure you can see a graphic representation of your patch management, tailored by severity and whether the patch requires a reboot or user interaction. This also fundamentally supports measurement and service level agreements by reporting SLAs in a way that’s visual. Not only will this help with compliance, but it will demonstrate that IT is making a difference to the business. This makes for better relationships throughout an organisation, whether internal or external.
Ian van Reenen is CTO for CentraStage.

Tuesday, March 12, 2013

The 7 most in-demand IT skills of 2013

The 7 Most In-Demand IT Skills of 2013

Illustration (Shutterstock): Demand for IT workers is expected to remain high in 2013.

However, with so many programming languages, platforms, protocols and other technologies, it is very hard to know which ones to learn.

It therefore helps to know in advance which technology trends companies will be looking for in 2013.

Based on surveys and various other data sources, here are the 7 skills most likely to be sought after in 2013, as reported by ReadWrite.

1. Everything related to cloud computing

In 2013, cloud computing will still be a trend in enterprise IT, as evidenced by the high demand for employees who understand cloud computing technology.

Specifically, companies will be looking for software developers with skills in virtualization and Software-as-a-Service (SaaS) who are also familiar with Platform-as-a-Service (PaaS).

According to the survey, 25 percent of responding companies plan to hire employees with SaaS and cloud computing skills in 2013. In general, the terms SaaS and virtualization will appear frequently on job-search sites.

2. IT Project Manager

Not every IT job is technical. Writing programs, maintaining infrastructure and designing software are important, but they are of little use if no one shepherds a project through to completion. It is therefore no surprise that 40 percent of IT executives are looking for managers in 2013.

3. JavaScript

In building websites, HTML and CSS are certainly important. HTML is the language behind a site's structure, while CSS is the language for its design. Neither is complete without JavaScript, which makes things interactive.

Companies today naturally want their sites to be as interactive as possible, so it is no surprise that employees with JavaScript skills will be in high demand in 2013.

4. Java/J2EE

According to a survey conducted by Dice, Java and the J2EE development platform will be among the most sought-after skills in 2013.

Unlike newer technologies such as Android and HTML5, demand for Java skills has actually been flat from year to year, but it has started to rise recently.


5. PHP

PHP may be losing some of its luster to mobile application development and newer web technologies such as HTML5, but it is still considered important.

To date, the language is used on more than 20 million sites and sits behind major sites such as Facebook and Wikipedia. Blogs and sites built with WordPress or Drupal also use PHP.

With so many sites still running PHP, it is only natural that this skill remains in demand in 2013.

6. iOS

Rising sales of iOS-based tablets and smartphones mean that job openings related to Apple's mobile operating system are also on the rise.

Application development for the iPhone and iPad has been a trend for several years, but the sharp increase really only came in the last two years: demand for people who understand iOS rose in 2011 and 2012.


7. HTML

What would the web be without HTML? This markup language is the foundation of every site, with cascading style sheets (CSS) making sites look good and JavaScript adding interactive functionality.

It is only natural that demand for employees with HTML skills is rising in 2013, given how many sites use the language.

In fact, the importance of the web will only keep growing alongside tablets, smartphones and cloud-based services. Consumers still need websites to access SaaS services in the cloud, and one study found that many tablet users remain keen on browsing the web.

Web markup has now reached HTML5, which is already supported by the latest versions of the major browsers.

Meanwhile, the CSS styling language has reached version 3.