Data Center is our focus

We help you build, access, and manage your data center and server rooms.

Structured Cabling

We help structure your cabling: fiber optic, UTP, STP, and electrical.

Get ready for the #Cloud

Start your Hyper-Converged Infrastructure.

Monitor your infrastructure

Monitor your hardware, software, and network (ITOM), and maintain your ITSM services.

Our Great People

Great team to support happy customers.

Saturday, May 25, 2013

Four fundamentals for managing your network



How to avoid IT strife with an effective management baseline






Amazon and Google made headlines last year for their highly publicized outages, proving just how susceptible companies are to IT disruptions, and the negative impact these can have on sales and productivity.
So before jumping into projects, IT and network managers should establish baselines to understand exactly where they stand.  As networks become more complex (virtual vs. physical, wired vs. wireless, etc.), the pressure is on to improve the performance and availability of business critical and customer-facing applications.
IT professionals would benefit immeasurably by setting four key baselines aimed at giving them control of their network:
1. Inventory baseline: You can't control what you don't know exists.
2. Performance baseline: Start with the “big five” (CPU, memory, disk, and interface utilization, plus ping latency), then gauge key application consumption and optimal thresholds.
3. Configuration baseline: Understand how current configurations impact security, compliance and overall control of the network.
4. Bandwidth and data flow baseline: Measure what's happening on the network, when and how much bandwidth is consumed.
A company's ability to grow hinges on IT performance and availability, yet many organisations fail to recognise IT's impact on the bottom line until it is too late. Consider the following scenarios: The e-commerce application lags or, worse, becomes unavailable. Corporate email goes offline and severely impacts productivity. Business-critical applications such as SalesForce.com or SAP become unresponsive.
The financial loss could be crippling.  No matter how minor or severe, IT disruptions impact everyone. And as IT environments become increasingly complex, the onus falls on IT departments to optimise effectively for performance and availability.
Optimising the infrastructure starts with establishing an IT baseline. This becomes a measuring stick for understanding:
a. How the network, applications and infrastructure perform
b. Where and why performance comes up short
c. Actionable steps for continuous optimisation.
Creating a baseline includes four essential elements:  inventory, performance, configuration and flow.
1.  Inventory baseline
While most IT managers have visibility into the core infrastructure, awareness of edge devices is much more opaque.
Unknown devices complicate network management, as these can consume significant resources and impact the performance of critical IT assets. To get a better handle on the network, there are three key areas to baseline: hardware, systems and applications.  Is everything up-to-date and running on the latest revision levels?  Have all security patches been deployed?  And how does everything in the infrastructure connect?
Understanding the interdependencies on a network is especially important for uncovering and resolving issues quickly. If an employee reconfigures a router by moving it from one subnet to another and causes a loop in the network, the change can have a catastrophic effect across the entire network.
Having an inventory baseline makes problem discovery and resolution much easier, and helps to control costs by identifying under-utilised resources that can be redeployed.
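As a rough illustration of the inventory idea, here is a minimal Python sketch that ping-sweeps a subnet and flags responding hosts missing from a known-device list. The subnet, the device list, and the Linux-style ping flags are assumptions for illustration, not part of any particular product.

# Inventory-baseline sketch: find devices that answer ping but are not
# in the known inventory. Assumes Linux "ping -c/-W" syntax.
import ipaddress
import subprocess

KNOWN_DEVICES = {"192.168.1.1", "192.168.1.10"}  # hypothetical inventory

def is_alive(ip: str) -> bool:
    # One ICMP echo with a one-second timeout.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def sweep(subnet: str) -> None:
    for host in ipaddress.ip_network(subnet).hosts():
        ip = str(host)
        if is_alive(ip) and ip not in KNOWN_DEVICES:
            print(f"Unknown device responding at {ip} - add it to the inventory")

if __name__ == "__main__":
    sweep("192.168.1.0/28")  # hypothetical subnet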
2.  Establish performance thresholds
Network admins need to know how much of the “big five” (mentioned above) their mission-critical services and applications consume. More importantly, they must know the optimal thresholds for each. If a network device is running at 98 per cent CPU utilization, there is a good chance that device is about to fail, impacting network availability and performance.
The key to performance baselines is to understand the acceptable threshold levels for each network device and server on the network, and to have a real-time alert system for when thresholds are breached.
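To make the threshold idea concrete, here is a minimal sketch, assuming the third-party psutil library (pip install psutil), that samples three of the "big five" on a local server and prints an alert when an assumed threshold is crossed. The threshold values and the alert action are illustrative, not recommendations.

# Performance-threshold sketch: sample CPU, memory and disk utilization
# and flag breaches. Thresholds here are hypothetical.
import psutil

THRESHOLDS = {"cpu": 90.0, "memory": 85.0, "disk": 90.0}

def check_once() -> None:
    readings = {
        "cpu": psutil.cpu_percent(interval=1),      # % CPU over a 1s sample
        "memory": psutil.virtual_memory().percent,  # % RAM in use
        "disk": psutil.disk_usage("/").percent,     # % of root filesystem used
    }
    for metric, value in readings.items():
        if value >= THRESHOLDS[metric]:
            # A real system would page someone or open a ticket here.
            print(f"ALERT: {metric} at {value:.1f}% (threshold {THRESHOLDS[metric]}%)")

if __name__ == "__main__":
    check_once()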
3. Configuration baselines
Security, compliance and control are on every CIO's priority list. They are also essential elements to baseline. Looking across devices on the network, system administrators need to ask themselves the following: Are they all running authorised configurations? Have all security features been enabled? Are default passwords still being used?  Can you generate an audit trail of all configuration changes? Misses in these areas could result in damaging security and compliance breakdowns.      
The most advanced IT departments enforce rigorous configuration change control policies. They archive authorised configurations, receive real-time alerts when configurations change, and generate reports answering the critical questions: who, what and when. This makes corrective action – and proving compliance – much easier.
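One hedged way to picture a configuration baseline: archive each device's configuration, record a SHA-256 hash of the known-good version, and compare on a schedule. The sketch below assumes a simple JSON baseline file and an archive directory, both of which are hypothetical.

# Configuration-drift sketch: compare archived configs against
# known-good hashes. File names and layout are illustrative.
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("config_baseline.json")  # {"router1.cfg": "<sha256>", ...}
CONFIG_DIR = Path("archived_configs")         # hypothetical archive directory

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_drift() -> None:
    baseline = json.loads(BASELINE_FILE.read_text())
    for name, expected in baseline.items():
        current = CONFIG_DIR / name
        if not current.exists():
            print(f"{name}: config missing from the archive")
        elif sha256_of(current) != expected:
            print(f"{name}: configuration changed - establish who, what and when")

if __name__ == "__main__":
    detect_drift()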
4.  Bandwidth and data flow
This baseline helps IT professionals understand how network capacity and bandwidth are consumed. A complete flow baseline breaks down capacity and bandwidth use by user, department and application.
Optimising network bandwidth and capacity is critical for enhancing performance and productivity. IT managers must understand what's happening on the network and how much bandwidth is being consumed. The end goal is to ensure that business-critical applications have the bandwidth they need to operate at maximum efficiency.
Understanding how much network capacity employees use also impacts the budget. From a bird’s-eye view, the company might seem to need more bandwidth, when in reality it might reclaim 30 per cent of existing bandwidth by identifying unauthorised use of bandwidth hogs like YouTube.
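As a toy illustration of a flow baseline, the sketch below aggregates flow records into per-host byte counts to surface top talkers. The in-memory records stand in for a real NetFlow or sFlow export, an assumption made purely for illustration.

# Flow-baseline sketch: who consumes the bandwidth, and how much.
from collections import Counter

# (source host, destination, bytes) - hypothetical flow records
FLOWS = [
    ("10.0.0.5", "youtube.example", 450_000_000),
    ("10.0.0.7", "crm.example", 20_000_000),
    ("10.0.0.5", "youtube.example", 300_000_000),
]

def top_talkers(flows, n=5):
    usage = Counter()
    for src, _dst, nbytes in flows:
        usage[src] += nbytes
    return usage.most_common(n)

for host, total in top_talkers(FLOWS):
    print(f"{host}: {total / 1e6:.0f} MB")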
Today’s IT environments are dynamic and complex.  Changes occur every day that affect performance and availability.  But a baseline of assets and performance thresholds gives IT a measuring stick they can leverage in real time to enhance overall network performance and efficiency. 
Rich Makris is a Senior Sales Engineer at Ipswitch




What is IT's impact on business?

Cisco conducted a survey on IT's impact on business. Here are the results.


Innovator, Firefighter, or Ghost? Cisco Survey Explores IT's Impact on Business

While the majority (63 percent) of IT professionals are confident in their ability to respond to the needs of the business, more than a quarter (27 percent) still equated their IT department's visibility into their company's business initiatives to a foggy day in London, according to the 2013 Cisco Global IT Impact Survey.

The top research findings reveal:
· Applications and user expectations are becoming more complex: almost three-fourths of IT participants (71 percent) reported that IT is deploying more applications today than one year ago.
· IT and the network are increasingly recognized as enabling the business: a higher percentage (78 percent) stated the network is more critical for delivering applications than it was at this time last year.
· IT-business alignment is improving, but IT is not always involved when the decisions are made: nearly nine out of 10 (89 percent) IT leaders collaborate with line of business leaders at least on a monthly basis, indicating a mutual business understanding of the critical and growing role of the network for application delivery. However, more than one-third (38 percent) of IT professionals surveyed said they are brought into the planning and deployment process late.
Among other findings, the Cisco Global IT Impact Survey also provided insight into IT sentiment toward emerging trends such as Software Defined Networking (SDN) and the Internet of Things. Results showed that one-third (34 percent) say they've seen an actual SDN deployment as often as they've seen Bigfoot, Elvis, or the Loch Ness Monster, while less than half (42 percent) claim to be vaguely familiar with the Internet of Things.
Additional findings:

Increasing Alignment between IT and Business Leaders, But More Work Is Needed

- When asked to compare the visibility of IT within their organization, 36 percent said "innovator" was the best description of how business leaders viewed their role. Additionally, 34 percent claimed "orchestrator" was the best fit, 15 percent chose "firefighter," 7 percent said "ghost," and 7 percent selected "fortune teller."
- Although survey data indicates the majority of IT leaders feel they are closely aligned with business practices, business applications are still being deployed without their knowledge. More than three-quarters (76 percent) of IT said business leaders and other non-IT teams roll out new applications without engaging IT either "all the time" or "sometimes."
- Furthermore, more than one-third (38 percent) of IT professionals surveyed claim they are brought into the planning and deployment process either "during the rollout process" or "the day before rollout." This data indicates that when businesses move ahead with new initiatives without first consulting IT, the network may be challenged with handling the new applications.
- IT leaders were asked to describe their attitudes toward asking business decision makers for budget toward network infrastructure upgrades. 18 percent said they would rather "break out of prison or train for a triathlon" than ask for additional budget.
- When asked how they know if they're doing a good job, one-quarter (26 percent) said "nobody calls us." Nearly another quarter (23 percent) chose "I sleep at home instead of the office."

Industry's New Business Opportunities Challenge Network Readiness

- Even with the business understanding of the growing role of the network for application delivery, 82 percent of respondents acknowledged that user experience with standard business applications is affected by network performance, even in basic applications such as Web, file services and email.
- When asked about the leading causes responsible for slowing down a new application rollout over the past year, most cited budget (34 percent), while 26 percent of respondents claimed data center infrastructure readiness, cloud readiness and network limitations such as bandwidth. One-quarter (25 percent) cited "general procrastination" as the leading cause.
- 71 percent are planning to deploy SDN solutions in the next 12 months. The main reasons? One-third (33 percent) cite cost savings, while another third (33 percent) said fast scalability of infrastructure.
- Almost three quarters (71 percent) report IT is deploying more applications than a year ago, but 41 percent claimed their networks were not ready to support "bring your own device" (BYOD) policies, while 38 percent said they were not ready to support cloud deployments.
- When asked to gauge their readiness for Internet of Things applications and deployments, nearly half (48 percent) believe it will open up new business opportunities.
- Survey participants ranked cloud readiness (29 percent) as the most important network initiative to their business in the upcoming year, followed by "converging IT technology and operations technology" (28 percent) and "data center consolidation/virtualization" (27 percent).
- When asked to rank the most difficult IT initiative over the past year, moving applications to the cloud (40 percent) ranked first, with data center virtualization ranking second (38 percent). This data aligns with the 2012 Cisco Global Cloud Networking Survey, which found that some IT professionals would rather get a root canal, dig a ditch, or do their own taxes than address network challenges associated with cloud deployments.
- Also consistent with the results of the 2012 Cisco Global Cloud Networking Survey was security being selected as the No. 1 roadblock to a successful implementation of cloud services or mobility, as 80 percent cited it as a challenge.

Business Service Management, a compelling AppManager feature



Business Service Management

ManageEngine® Applications Manager helps enterprises ensure their critical business applications have high uptime. The Business Service Management capability adds a business context to monitoring IT resources: it helps the IT team proactively monitor servers, applications, and network services, and gives IT Operations visibility into how IT resources impact business applications. With this visibility, IT managers can ensure adequate resources are provisioned for the IT services that affect the business, and that IT meets the goals of the business.
Traditional systems and network management tools use a siloed approach to monitoring. This makes the workflow for the Operations team more complex and does not help the IT team troubleshoot performance issues quickly. ManageEngine Applications Manager, however, gives an integrated view across technology silos.
Complex infrastructures need powerful tools to simplify monitoring. Today's web applications are N-tiered and depend on various resources such as file servers, databases, web servers, middleware components, and legacy web services linked using SOA. Many of these may be clustered for scalability and high availability. This kind of setup needs tools that help define proper correlation between resources.
ManageEngine® Applications Manager helps monitor these complex infrastructures and makes performance and SLA monitoring meaningful for all stakeholders.

Integration with ITIL-ready ServiceDesk: ManageEngine® Applications Manager integrates with ManageEngine ServiceDeskPlus and automatically logs a ticket, improving your workflow. It also integrates with other third-party ticketing systems.
Integration with Network Monitoring Software - ManageEngine OpManager: The Network Monitoring Connector enables the user to monitor the availability and performance of network devices such as routers and switches, along with the performance of application servers, databases, web servers and web services.
Integration with SAN Availability and Monitoring Software - ManageEngine OpStor: ManageEngine Applications Manager integrates with ManageEngine OpStor via the ManageEngine OpStor SAN Monitoring Connector. In addition to monitoring application and server performance with ManageEngine Applications Manager, the connector enables users to monitor the performance and availability of SANs and other storage devices, with unified reporting, alarm management and SLA management.

Key Benefits of Business Service Management for the Enterprise

Add a Business Context to your IT Resources 
ManageEngine® Applications Manager gives a business-centric view of your IT and helps you know which business processes are affected when there is downtime or a performance bottleneck. A single performance dashboard also helps consolidate the heterogeneous monitoring tools and scripts used within the enterprise and gives IT better control.

Easy Troubleshooting 
The integrated application, server and database monitoring capability ensures IT administrators can troubleshoot with ease. A single console for monitoring, along with the Root Cause Analysis capability, empowers users to take remedial action fast.

Define dependencies and improve Fault Management and SLA Management 
ManageEngine Applications Manager supports grouping resources in a hierarchical model. This, along with support for configuring dependencies, ensures clustered setups and other high-availability mechanisms are taken into account when alarms are sent to the Operations team or when SLA reports are generated.
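To picture how dependency-aware alarming can work in general (a generic sketch, not Applications Manager's actual implementation), the Python fragment below walks a hypothetical child-to-parent dependency map and suppresses alarms whose root cause lies further up the hierarchy.

# Dependency-suppression sketch: only alarm on the root cause.
DEPENDS_ON = {                # child -> parent (hypothetical topology)
    "app-server-1": "core-switch",
    "app-server-2": "core-switch",
    "database": "app-server-1",
}
STATUS = {"core-switch": "down", "app-server-1": "down",
          "app-server-2": "down", "database": "down"}

def root_cause(resource: str) -> str:
    # Walk up the dependency chain while parents are also down.
    parent = DEPENDS_ON.get(resource)
    if parent and STATUS.get(parent) == "down":
        return root_cause(parent)
    return resource

for res, state in STATUS.items():
    if state != "down":
        continue
    cause = root_cause(res)
    if cause == res:
        print(f"ALARM: {res} is down")
    else:
        print(f"suppressed: {res} (root cause: {cause})")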

Reduce Application Support and Maintenance Costs 
With ManageEngine Applications Manager, IT administrators can focus on resolving problems, planning inventory and other activities that are core to the business. The support for industry best practices ensures your infrastructure is better managed. Additionally, the agentless monitoring model reduces setup time and manpower costs, adding to the savings for the enterprise.


Four foundational elements of an Application Performance Monitoring strategy



The Anatomy of APM – 4 Foundational Elements to a Successful Strategy

April 04, 2012
by Larry Dragich
Auto Club Group
By embracing End-User-Experience (EUE) measurements as a key vehicle for demonstrating productivity, you build trust with your constituents in a very tangible way. The translation of IT metrics into business meaning (value) is what APM is all about.
The goal here is to simplify a complicated technology space by walking through a high-level view within each core element. I’m suggesting that the success factors in APM adoption center around the EUE and the integration touch points with the Incident Management process.
When looking at APM at 20,000 feet, four foundational elements come into view:


- Top Down Monitoring (RUM)
- Bottom Up Monitoring (Infrastructure)
- Incident Management Process (ITIL)
- Reporting (Metrics)



Top Down Monitoring

Top Down Monitoring, also referred to as real-time application monitoring, focuses on the End-User-Experience. It has two components, passive and active. Passive monitoring is usually an agentless appliance that leverages network port mirroring. This low-risk implementation provides one of the highest values within APM in terms of application visibility for the business.
Active monitoring, on the other hand, consists of synthetic probes and web robots that help report on system availability and predefined business transactions. It is a good complement to passive monitoring, providing visibility into application health during off-peak hours when transaction volume is low.
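A synthetic probe can be as small as a timed HTTP request. The sketch below, with a hypothetical transaction URL and an assumed latency budget, flags slow or failed responses the way an active monitor's web robot might.

# Synthetic-probe sketch: availability plus response time for one URL.
import time
import urllib.request

URL = "https://example.com/checkout"  # hypothetical transaction endpoint
LATENCY_BUDGET = 2.0                  # seconds; an assumed SLA target

def probe(url: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            elapsed = time.monotonic() - start
            state = "SLOW" if elapsed > LATENCY_BUDGET else "OK"
            print(f"{state}: HTTP {resp.status} in {elapsed:.2f}s")
    except Exception as exc:  # DNS failure, timeout, HTTP error, etc.
        print(f"DOWN: {exc}")

if __name__ == "__main__":
    probe(URL)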

Bottom Up Monitoring

Bottom Up Monitoring, also referred to as infrastructure monitoring, usually ties into an operations manager tool that becomes the central collection point where event correlation happens. At a minimum, up/down monitoring should be in place at this level for all nodes/servers in the environment. System automation is the key component to the timeliness and accuracy of incidents created through the trouble-ticket interface.
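Up/down monitoring at its most basic is a reachability check. The sketch below attempts a TCP connection to a management port on each node in a hypothetical list; a real system would raise incidents through the trouble-ticket interface instead of printing.

# Up/down sketch: TCP reachability for a list of nodes.
import socket

NODES = [("web01.example", 443), ("db01.example", 5432)]  # hypothetical

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in NODES:
    print(f"{host}:{port} is {'UP' if is_up(host, port) else 'DOWN'}")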

Incident Management Process

The Incident Management Process as defined in ITIL is a foundational pillar supporting Application Performance Management (APM). In our case, the Incident Management, Problem Management and Change Management processes had been established in the culture for a year before we began implementing the APM strategies.
A look into ITIL's Continual Service Improvement (CSI) model and the benefits of Application Performance Management indicates they are both focused on improvement, with APM defining toolsets that tie together specific processes in Service Design, Service Transition, and Service Operation.

Reporting Metrics

Capturing the raw data for analysis is essential for an APM strategy to succeed. It is important to arrive at a common set of metrics to collect, and then standardize on a common view of how to present the real-time performance data.
Your best bet: alert on the averages and profile with percentiles. Use five-minute averages for real-time performance alerting, and percentiles for overall application profiling and Service Level Management.
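Here is a small sketch of that advice: a mean over a window of samples for alerting, and p95/p99 percentiles for profiling. The response-time samples and the simple nearest-rank percentile method are assumptions for illustration.

# Averages for alerting, percentiles for profiling.
import statistics

samples_ms = [120, 135, 110, 4000, 125, 130, 118, 140, 122, 3800]  # hypothetical

def percentile(data, pct):
    # Nearest-rank percentile; good enough for a sketch.
    ordered = sorted(data)
    index = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[index]

print(f"window average: {statistics.mean(samples_ms):.0f} ms")  # alert on this
print(f"p95: {percentile(samples_ms, 95)} ms")  # profiling view
print(f"p99: {percentile(samples_ms, 99)} ms")  # shows outliers the mean hides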
Conclusion
As you go deeper in your exploration of APM and begin sifting through the technical dogma (e.g. transaction tagging, script injection, application profiling, stitching engines, etc.) for key decision points, take a step back and ask yourself why you're doing this in the first place: To translate IT metrics into an End-User-Experience that provides value back to the business.
If you have questions on the approach and what you should focus on first with APM, see Prioritizing Gartner's APM Model for insight on some best practices from the field.
Larry Dragich is Director of Enterprise Application Services at the Auto Club Group.

Gartner's five dimensions of application performance monitoring (APM)




Gartner's 5 Dimensions of APM

by Pete Goldin
Gartner's recently published Magic Quadrant for Application Performance Monitoring defines “five distinct dimensions of, or perspectives on, end-to-end application performance” which are essential to APM, listed below.
Gartner points out that although each of these five technologies are distinct, and often deployed by different stakeholders, there is “a high-level, circular workflow that weaves the five dimensions together.”

1. End-user experience monitoring

End-user experience monitoring is the first step, which captures data on how end-to-end performance impacts the user, and identifies the problem.

2. Runtime application architecture discovery, modeling and display

In the second step, the software and hardware components involved in application execution, and their communication paths, are studied to establish the potential scope of the problem.

3. User-defined transaction profiling

The third step involves examining user-defined transactions, as they move across the paths defined in step two, to identify the source of the problem.

4. Component deep-dive monitoring in application context

The fourth step is conducting deep-dive monitoring of the resources consumed by, and events occurring within, the components discovered in step two.

5. Analytics

The final step is the use of analytics – including technologies such as behavior learning engines – to crunch the data generated in the first four steps, discover meaningful and actionable patterns, pinpoint the root cause of the problem, and ultimately anticipate future issues that may impact the end user.
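As a loose illustration of the behavior-learning idea (not Gartner's definition of analytics), the sketch below learns a mean and standard deviation from historical readings and flags anything more than three standard deviations out. The data and the 3-sigma rule are assumed for illustration.

# Minimal anomaly-detection sketch for the analytics dimension.
import statistics

history = [110, 120, 115, 118, 112, 121, 117, 119]  # hypothetical baseline, ms
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(reading: float, sigmas: float = 3.0) -> bool:
    return abs(reading - mean) > sigmas * stdev

for reading in (118, 450):
    label = "anomalous" if is_anomalous(reading) else "normal"
    print(f"{reading} ms -> {label} (mean {mean:.0f}, stdev {stdev:.1f})")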

Applying the 5 dimensions to your APM purchase

“These five functionalities represent more or less the conceptual model that enterprise buyers have in their heads – what constitutes the application performance monitoring space,” explains Will Cappelli, Gartner Research VP in Enterprise Management and co-author of the Magic Quadrant for Application Performance Monitoring.
“If you go back and look at the various head-to-head competitions and marketing arguments that took place even as recently as two years ago, you see vendors pushing one of the five functional areas as: what you need in order to do APM,” Cappelli recalls. “I think it's only because of the persistent demand on the part of enterprise buyers, that they needed all five capabilities, that drove the vendors to populate their portfolios in a way that would adequately reflect those five functionalities.”
The question is: should one vendor be supplying all five capabilities?
“You will see enterprises typically selecting one vendor as their strategic supplier for APM,” Cappelli continues, “but if that vendor does not have all the pieces of the puzzle, the enterprise will supplement with capabilities from some other vendor. This can make a lot of sense.”
“When you look at some of the big suites, and even the vendors that offer all five functionalities, in most cases those vendors have assembled those functionalities out of technologies they have picked up when they acquired many diverse vendors. Even when you go out to buy a suite from one of the larger vendors that offers everything across the board, at the end of the day you are left with very distinct products even if they all share a common name.”
For this reason, Cappelli says there is usually very little technology advantage associated with selecting a single APM vendor over going with multiple vendors providing best-of-breed products for each of the five dimensions. However, he notes that there can be a significant advantage to minimizing the number of vendors you have to deal with.
“Because APM suites, whether assembled by yourself or by a vendor, are complex entities, it is important to have the vendor support that can span across the suite,” Cappelli says. “So in general it makes sense to go with a vendor that can support you at least across the majority of the functionalities that you want.”
“But you do need to be aware that the advantage derived from going down that path – choosing a single vendor rather than multiple vendors – has more to do with that vendor's ability to support you in solving a complex problem rather than any kind of inherent technological advantage derived from some kind of pre-existing integration.”

Application performance monitoring and management


Application performance is attracting growing interest from many companies. They want to understand how their applications perform and respond, and to pinpoint the problems that arise.


APM Convergence: Monitoring vs. Management

March 06, 2013
by Larry Dragich
Auto Club Group
APM is entering into a period of intense competition of technology and strategy with a multiplicity of vendors and viewpoints. While the nomenclature used within its space has five distinct dimensions that elucidate its meaning, the very acronym of APM is in question: Application Performance ... Monitoring vs. Management.
We would not normally use monitoring and management synonymously, yet in the APM vernacular they seem to be interchangeable. This may be a visceral response, but I see the APM idiom converging on itself and becoming a matter of expectations vs. aspirations.
Application Performance Monitoring is the expectation of the tool sets themselves and how to implement them. Gartner provides five dimensions that describe these technologies, which are meant to be not so much "prescriptive" as "descriptive". Read: Gartner Q&A Part 1: Analytics vs. APM.
Application Performance Management is the aspiration of what we want the APM space to become. It is the umbrella over the other disciplines (e.g. enterprise monitoring, performance analysis, system modeling, and capacity planning).
To illustrate this concept, consider The Anatomy of APM, which gives you a blueprint of the high-level elements to include when implementing an APM solution. Each element goes deep as a broad category, and each category encompasses specific monitoring tools that support the end-user-experience (EUE).


The EUE is at the heart of it all, and has become the focal point that allows us to make the connection to the business and speak to them in a language they can appreciate. Understandably, the technology overlap across the elements can leave even the savviest IT leader perplexed about APM and what it means.
Application Performance Management has the potential to become an IT discipline, however the overall concepts outlined here need to penetrate deeper into the IT culture in order for this to emerge as a discipline. Just as the ground will heave in a winter frost and then relinquish its state during the spring thaw, so will the monitoring technologies expand and converge as the market demands and new ideas are born.
No matter where you believe APM's heritage has come from (e.g. BSM, BTM, NPM, etc.), monitoring and management will both have their roles to play in the APM journey. APM is the translation of IT metrics into business meaning (value). How that is actually accomplished however, is another story.

Conclusion

It's important to consider that APM is more than just an acronym: it is a journey, a movement, a new way of thinking, and a new frame of reference that stitches business value together with IT metrics. APM promises to become the conduit that helps IT cross the chasm from "an expense to be squeezed" to a true business partner providing value.
Larry Dragich is Director of Enterprise Application Services at the Auto Club Group.



Paessler introduces its partner certification




Paessler Intros Partner Certification Program For PRTG Network Monitoring Software

Network monitoring specialist Paessler AG is rolling out its first-ever partner certification program for its PRTG network monitoring software.
Paessler, a Nuremberg, Germany-based company with a growing footprint in the U.S. market and the channel, is offering both a sales and technical certification program for its flagship PRTG software. The aim of these programs, Paessler said, is to provide a level of distinction to some of its top solution provider partners and help those partners increase their visibility among potential clients.
"We have been asked been asked by our partners -- especially by VARs, system integrators and smaller resellers -- 'how can we qualify and how can we show our qualifications,'" said Thomas Timmermann, vice president of business development, North America at Paessler. "[Partners said] 'we have been working with PRTG for years, we know the software, we know the tool, but there are no certifications programs, so we can't prove this.'"
To change this, Paessler is now encouraging partners to apply for either the Paessler Certified Sales Professional certification or the Paessler Certified Monitoring Professional certification. Partners can apply for either or both programs free of charge, Paessler said, and all existing partners are eligible to do so.
Both certifications require solution providers to complete an online test, which they can request to take through Paessler's website. There is a test for each of the two certifications; solution providers taking the Paessler Certified Sales Professional test must demonstrate knowledge related to PRTG licensing information, upgrade processes and maintenance renewals, while those taking the Paessler Certified Monitoring Professional test will need to demonstrate a more technical knowledge of PRTG and network infrastructure. Solution providers that pass the test will receive a certificate and digital "Paessler Certified" icon they can display on their websites. Paessler said those that attain the certifications will also be distinguished on its own website as being PRTG-certified.
Down the line, Paessler's Timmermann said holding these certifications will become necessary to achieve either Paessler's Silver or Gold partner status.
"The next step that we are going to introduce is that [partners] will need at least one Sales and one Monitoring Professional Certification to reach certain partner levels," Timmerman said.
Josh Sanders, a systems engineer at Lockstep Technology Group, a Duluth, Ga.-based solution provider and Paessler partner, said the new Paessler certification program is expected to help Lockstep on-board new customers.
"When it comes to new customers, people who have not worked with Lockstep in the past, I think the certification program is key because it gives them a comfort level that we actually know what we are doing," said Sanders, who has already achieved the Paessler Certified Monitoring Professional certification.
Paessler's certification program underscores the company's growing engagement with the U.S. channel. Paessler today boasts roughly 1,000 solution provider partners in the U.S., with roughly 5,000 partners worldwide. Paessler said it closed out 2012 with 40 percent of its total sales going through the channel, and that its U.S. and Canada channel sales have surged more than 90 percent.
As demand for its PRTG software continues to grow -- Paessler's sales were up 57 percent year-over-year in 2012 -- the company said it's looking to expand its partner base, particularly in the U.S.
James Harden, managing partner at Lockstep Technology Group, said selling PRTG has been a big differentiator for Lockstep, especially because the software is so customizable. PRTG comes with customizable sensors, meaning solution providers can tweak the software to monitor the exact applications or network components an end customer wants to monitor.
"The PRTG product allows you to write these custom sensors, and that's why we really chose Paessler to partner with because it allows us to have a more rounded monitoring practice," Harden said.
Paessler has made other recent enhancements to its flagship PRTG product, including support for VMware's vSphere 5, which enables monitoring for ESXi 5 hosts, virtual machines and vCenter 5.
PUBLISHED MAY 24, 2013

Windows 8 fails to lift PC sales




Gartner: Windows 8 failed to kick-start PC market

Summary: The latest report released by research firm Gartner suggests that dwindling PC sales are signalling a turn in the PC market, and Windows 8 has done nothing to stop it.
Research firm Gartner says that an estimated drop of 4.9 percent in worldwide PC sales over the fourth quarter has signalled a shift in the market.
A total of 90.3 million units were shipped in Q4, but a shift in consumer habits and the fragile state of the economy left PC manufacturers with little to celebrate as their products were shunned in favor of tablets.
Mikako Kitagawa, principal analyst at Gartner, said:
“Tablets have dramatically changed the device landscape for PCs, not so much by ‘cannibalizing’ PC sales, but by causing PC users to shift consumption to tablets rather than replacing older PCs. This transformation was triggered by the availability of compelling low-cost tablets in 2012, and will continue until the installed base of PCs declines to accommodate tablets as the primary consumption device."
Rather than consumers asking for a new PC for Christmas, Gartner says the plethora of cheap tablets ensured they replaced PCs as the 'must-have' gadget during the holiday season. Although a number of cheap notebooks were on offer, this did little to lift the Christmas cheer for PC vendors.
However, it may not all be doom and gloom for PC makers. "On the positive side for vendors, the disenfranchised PCs are those with lighter configurations, which mean that we should see an increase in PC average selling prices (ASPs) as users replace machines used for richer applications, rather than for consumption,” Kitagawa said.
Many of us waited to see if Microsoft's new operating system, Windows 8, would have any major impact on PC sales. Gartner says that Windows 8 failed to revitalize the PC market in Q4, mainly due to "lackluster form factors" in PC vendor offerings and a "lack of excitement" which is found in the touch element of tablets.
The research firm also says that HP managed to climb back up to secure the top spot in worldwide PC shipments against rival Chinese firm Lenovo. However, Hewlett-Packard's shipment rate did not change compared to a year ago, whereas Lenovo did experience the best growth rate among the top five PC vendors. Dell came in third place -- although its sales fell by 21 percent year-on-year -- whereas Acer came in fourth with a drop of 11 percent in PC shipments.
[Chart: Gartner worldwide PC shipment estimates, Q4 2012]
Over 2012, PC shipments reached 352.7 million units, which Gartner says is a 3.5 percent decline from 2011. HP retains the top spot overall with a 16 percent market share, and Lenovo is second with 14.8 percent. However, Asus showed the highest rate of growth, with shipments increasing 17.1 percent.

Chip sales rise despite declining PC sales




Chip sales up slightly in Q1 despite continued PC market decline

Summary: Semiconductor sales are up month-on-month and year-on-year, as the balancing act between PCs and tablets begins to help the memory and chip-making industry recover.
Sales of semiconductors were up by 0.9 percent year-on-year during the first quarter.
Figures from the Semiconductor Industry Association (SIA) show that worldwide sales in semiconductors — such as memory and chips in PCs, tablets and smartphones — hit $23.48 billion during March, an increase of 1.1 percent on February's totals of $23.23 billion. (All monthly sales represent a three-month moving average.)
It's far from a significant month-on-month jump, considering the semiconductor industry has been in a steep decline over the past three years, but the market is beginning to stabilize after its latest peak in 2010. 
There's good news and bad. 
The Asia-Pacific market saw a massive 6.9 percent rise in sales, while Europe kept things in modest check with a 0.7 percent increase. However, the Americas saw a 1.5 percent drop, and Japan's market plummeted by 18 percent year-over-year, which accounted for the lack of any significant uptick in global sales.
[Chart: worldwide semiconductor sales. Image: Semiconductor Industry Association]
Semiconductors still matter. After all, they're the bits of silicon that power your handheld and portable devices, as well as the components that keep the clunky desktop PC ticking over.
During the Thai floods and the Japan earthquake, and in the midst of the world's most painful recession in generations, the semiconductor industry was hit hard. People weren't spending because the global economy had ground to a halt.
But during the period in which the iPad was first launched in 2010 — the iPad didn't just help the tablet market, it actively carved it out of nothingness — sales rocketed. Almost as soon as they did, though, the PC market began to decline. At this point, with tablet uptake on the rise and PCs generally down, the balance is beginning to level out.