With your users expecting full access to the enterprise network on their mobile devices, have you implemented a security plan?

Nathan Wenzler has some great advice here. Talk to one of our experts, who can help you refine and implement solutions from our security partner, Palo Alto Networks:

Organizations with mobile workforces face serious challenges when it comes to their overall cybersecurity posture. As more users leverage laptops, tablets, smartphones and other portable devices, security risks begin to increase in three areas which can be simply categorized as:

  • What users bring into the environment
  • What users take out of the environment
  • An overall increase in scope of what can be attacked

Looking at the risk of “what users bring into the environment”: companies must deal with devices being attached to their corporate networks after those same devices have connected to a user’s home network, public Wi-Fi hotspots and any number of other unsecured networks. These systems are likely not as well protected as those governed by enterprise-class endpoint security tools, and thus run a much larger risk of being infected with malware, viruses, ransomware, worms and other malicious programs used by attackers. When a user’s compromised device is connected to a corporate network, it can launch further attacks against the other devices on the network or serve as a cybercriminal’s point of entry, bypassing all perimeter defenses. There are many strategies that can be employed to defend against this sort of problem, including, but not limited to:

  • Set strong policies requiring that any device connected to the corporate network runs up-to-date endpoint protection software and is fully patched (a minimal posture-check sketch follows this list)
  • Create wireless networks for users’ non-work systems, which they can use for Internet access and other functions without being connected directly to the internal corporate network
  • Develop Internet-facing services for email, messaging and other basic corporate functions which users can access remotely without needing internal network access
  • Assign users corporate-owned mobile devices, instead of allowing personally owned devices, and govern them with the same endpoint protection software, access controls and other corporate controls as any other device on the internal network
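To make the first bullet above concrete, here is a minimal sketch in Python of the kind of posture check a network-admission tool performs before letting a device join. The field names and thresholds are illustrative assumptions, not any particular vendor’s API:

```python
# Hypothetical posture check: admit a device only if it runs endpoint
# protection and its signatures and patches are recent enough per policy.
from datetime import date, timedelta

MAX_SIGNATURE_AGE = timedelta(days=7)   # assumed policy threshold
MAX_PATCH_AGE = timedelta(days=30)      # assumed policy threshold

def may_join_corporate_network(device: dict, today: date) -> bool:
    """Return True only if the device meets the endpoint-protection policy."""
    if not device.get("endpoint_protection_installed", False):
        return False
    if today - device["av_signatures_updated"] > MAX_SIGNATURE_AGE:
        return False
    if today - device["last_patched"] > MAX_PATCH_AGE:
        return False
    return True

laptop = {
    "endpoint_protection_installed": True,
    "av_signatures_updated": date(2017, 9, 1),
    "last_patched": date(2017, 8, 20),
}
print(may_join_corporate_network(laptop, date(2017, 9, 5)))  # True
```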

As for “what users take out of the environment”: keeping classified or critical, proprietary data safe is a primary need of any organization, regardless of vertical. Intellectual property theft is a very real problem for almost any organization, even in areas where it may not seem obvious. Take universities and other organizations in academia, where research papers and doctoral theses can generate millions of dollars in revenue from grants, government investment or corporate efforts to license the findings for commercial purposes. Users who have access to this kind of critical data could easily copy it to unsecured mobile devices and carry it out of the protected network, compromising the data and potentially costing the organization significant revenue. To protect against this kind of data loss and theft, organizations must maintain strong access controls over who can reach information stored across their network and adopt least-privilege policies so that only the users who must have access do. For more complex access requirements, consider implementing a Data Loss Prevention (DLP) solution, which can provide a wide array of logging, tracking and access-control functions to keep a user, whether authorized or not, from exfiltrating critical information out of the environment.
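The access-control side of that advice fits in a few lines. The sketch below, with purely illustrative names and data, shows the least-privilege pattern described above: access is denied unless explicitly granted, and every attempt is logged the way a DLP tool would record it:

```python
# Minimal least-privilege sketch: deny by default, log every attempt.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dlp-sketch")

# Explicit allow-list per document; anyone not listed is refused.
ENTITLEMENTS = {
    "thesis-draft.docx": {"alice", "advisor-bob"},
}

def read_document(user: str, doc: str) -> bool:
    allowed = user in ENTITLEMENTS.get(doc, set())
    log.info("user=%s doc=%s allowed=%s", user, doc, allowed)
    return allowed

read_document("alice", "thesis-draft.docx")    # granted and logged
read_document("mallory", "thesis-draft.docx")  # denied and logged
```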

Finally, when organizations expand their workforces outside the confines of a well-controlled network housed in physical office locations, the more common, outdated defense strategies become difficult to implement and manage. The notion of a traditional Internet perimeter, where a firewall can block out unwanted external traffic, simply disintegrates in today’s cloud-based and hybrid environments, and network admins must now wrestle with huge numbers of mobile devices all over the globe that access corporate resources and connect to public, unsecured networks. This means the number of devices hackers can attack goes up dramatically, while the ways in which those devices can be protected start to shrink.

It’s imperative that organizations find security solutions that will scale up alongside not only the sheer volume of additional devices being used, but also the scope of where and when those devices are used to perform work. Leveraging cloud-based technologies to store data centrally can be one option, provided that sufficient technological controls and legal protections are in place. Additionally, more and more security vendors are providing strong cloud-based solutions which can scale up quickly and easily to identify and protect your devices wherever they are in the world, and which provide centralized management to the internal IT staff responsible for controlling those assets.

While all organizations face a number of challenges as they move to a more nimble and mobile workforce, with proper planning, strong controls and scalable cloud-based security technologies, they can reduce their overall risk of loss while dramatically improving the security posture of the environment as a whole.

http://www.csoonline.com/article/3216469/mobile-security/security-on-the-move-protecting-your-mobile-workforce.html

There’s a good synopsis here from Dennis Crouch on the You Own Devices Act (http://patentlyo.com/patent/2015/02/devices-yoda-2015.html), which would empower you, as the owner of your system, to get value for your investment in the system software when you go to sell the system to us. Please contact your local representatives to voice your support for this act. Here’s Dennis’ write-up:

You Own Devices Act (‘YODA’) of 2015

Reps Farenthold and Polis today reintroduced the You Own Devices Act (‘YODA’) that I discussed in September 2014.  The provision attempts a statutory end run around end-user license agreements (EULAs) for computer software.  The current and growing market approach is to license rather than sell software.  That approach cuts out the first-sale (exhaustion) doctrine and allows the copyright holder to limit resale of the software by the original purchaser and to impose substantial use restrictions.  That approach is in tension with the common law tradition of refusing to enforce use or transfer restrictions.  However, a number of judges have bought into the idea that the existence of an underlying copyright somehow requires favoring “freedom of contract” over the traditional unreasonable-restraint-of-trade doctrines.

YODA addresses this issue in a limited way – focusing on transfer rights – and would provide someone transferring title to a computer with the right to also transfer an ‘authorized copy’ of software used on the computer (or transfer the right to obtain such copy).  That right would be absolute – and “may not be waived by any agreement.”  Even without the proposed law, courts and the FTC should be doing a better job of policing this behavior that strays far from our usual pro-market orientation. However, the provision would make the result clear cut.

In some ways, I think of this provision as akin to the fixture rules in real property — once personal property (such as a brick) is fixed to the land (by being built into a house), the brick becomes part of the land and can be sold with the land. In the same way, a computer would come with rights to use all (legitimate) software therein.

To be clear, YODA would not allow transfer of pirated software, but would allow transfer in cases where the owner has a legitimate copy but is seemingly subject to a contractual transfer restriction.

 

Farenthold is a Texas Republican and a member of the IP Subcommittee of the Judiciary Committee.  On Twitter, Farenthold quipped: “Luke didn’t have to re-license Anakin’s lightsaber, so why should you?”


About Dennis Crouch

Law Professor at the University of Missouri School of Law

Big Data has been the buzzword for some time now, and IBM is hopping on the trend as shops try to get a handle on “Big Data” in their operations.  IBM has announced that its new Power Systems servers, built on the Power8 processor, are the perfect way to handle the demands posed by the hottest buzz in the market.  It’s a welcome addition to the product line and is sure to move some organizations to upgrade to take advantage of the newest technology.  But is this latest machine a need for your organization, or are you better served not biting on the latest offering from IBM as it capitalizes on the buzz around Big Data?  Contact us for details on how a Power7 or Power6 processor may be the necessary upgrade you need, at up to 90% off of IBM list price.  In the meantime, here’s the news from IBM on the Power8-based offerings:

ARMONK, N.Y. – 23 Apr 2014: IBM (NYSE: IBM) today debuted new Power Systems servers that allow data centers to manage staggering data requirements with unprecedented speed, all built on an open server platform.  In a move that sharply contrasts with other chip and server manufacturers’ proprietary business models, IBM, through the OpenPOWER Foundation, released detailed technical specifications for its POWER8 processor, inviting collaborators and competitors alike to innovate on the processor and server platform, providing a catalyst for new innovation.

Built on IBM’s POWER8 technology and designed for an era of Big Data, the new scale-out IBM Power Systems servers culminate a $2.4 billion investment, three-plus years of development and exploit the innovation of hundreds of IBM patents — underscoring IBM’s singular commitment to providing higher-value, open technologies to clients. The systems are built from the ground up to harness Big Data with the new IBM POWER8 processor, a sliver of silicon that measures just one square inch, which is embedded with more than 4 billion microscopic transistors and more than 11 miles of high-speed copper wiring.  

“This is the first truly disruptive advancement in high-end server technology in decades, with radical technology changes and the full support of an open server ecosystem that will seamlessly lead our clients into this world of massive data volumes and complexity,” said Tom Rosamilia, Senior Vice President, IBM Systems and Technology Group. “There no longer is a one-size-fits-all approach to scale out a data center. With our membership in the OpenPOWER Foundation, IBM’s POWER8 processor will become a catalyst for emerging applications and an open innovation platform.”

You can read the rest here:  http://www-03.ibm.com/press/us/en/pressrelease/43702.wss

Till next time!

Interesting news here out of IBM.  They have licensed a Chinese manufacturer to make its own version of the forthcoming IBM Power8 chip.  It’s a notable development for the market, and it will be worth watching how it changes the IBM Power landscape: http://www.enterprisetech.com/2014/01/21/chinese-startup-make-power8-server-chips/

IBM has added another member to its OpenPower Consortium, which seeks to expand the use of Power processors in commercial systems. The Chinese government has made no secret that it wants an indigenous chip design and manufacturing business, and the newly formed Suzhou PowerCore aims to be one of the players in the fledgling Chinese chip market – and one that specializes in Power chips.

The details of the licensing agreement between the OpenPower Consortium, which is controlled by IBM at the moment, and Suzhou PowerCore are still being hammered out.

The OpenPower Consortium was founded in August last year with the idea of opening up Power chip technology much as ARM Holdings does for its ARM chip designs. Thus far, search engine giant Google, graphics chip maker Nvidia, networking and switch chip maker Mellanox Technologies, and motherboard maker Tyan have joined the effort. In December, the consortium got its bylaws and governance rules together and held its first membership meeting.

Brad McCredie, vice president of Power Systems development within IBM’s Systems and Technology Group, tells EnterpriseTech that the arrangement with the OpenPower Consortium gives Suzhou PowerCore a license to the forthcoming Power8 processor and will allow the startup to tweak the design as it sees fit for its customers, as well as to have the chips made in whichever foundries it chooses.

Initially, Suzhou PowerCore will make modest changes to the Power8 chip and will use IBM’s chip plant in East Fishkill, New York to manufacture its own variants of the chips. The timeline for such modifications is unclear, but McCredie said that, generally speaking, it can take two years or more to design a chip and get it coming off the production line. Presumably it will not take that long for Suzhou PowerCore to get its first iteration of Power8 out the door, particularly given that IBM will have worked the kinks out of its 22 nanometer processes as it rolls out its own Power8 chips sometime around the middle of the year. The chip development teams of Suzhou PowerCore and IBM are working on the timelines and roadmaps for the Chinese Power chip right now.

The Chinese Academy of Sciences has six different chip projects that it has helped cultivate in the country over the past decade. The “Loongson” variant of the MIPS processor, which is aimed at servers and high performance computing clusters, is one chip China is working on, as is a clone of the OpenSparc processor that was open sourced by Sun Microsystems before it was acquired by Oracle. This latter chip, named “FeiTeng,” has been used as adjunct processors in service nodes in the Tianhe-1A massively parallel supercomputer. The Loongson chips are in their third generation of development and are expected to appear in servers sometime this year.

So why would China be interested in a Power chip? “China is very large, and it has the resources to place more than one bet,” explains McCredie. “In the conversations that we are having with them, it is clearly much more pointed at commercial uses, whereas the activity we have seen thus far is much more pointed at scientific computing. This is going after big data, analytics, and large Web 2.0 datacenters.”

The initial target markets for these PowerCore processors are in banking, communications, retail, and transportation – markets where IBM has made good money selling its own Power Systems machines for the past several years. Suzhou PowerCore expects to see its Power variants in server, storage, and networking gear eventually.

Suzhou PowerCore is putting together the first chip development team that is working in conjunction with the OpenPower Consortium. It will probably not be the last such team if IBM’s licensing terms are flexible and affordable. Suzhou PowerCore is backed by Jiangsu Province and is located in the Suzhou National New & Hi-Tech Industrial Development Zone that is about 30 miles west of Shanghai. The Research Institute of Jiangsu Industrial Technology is given the task of building an ecosystem dedicated to Power software and hardware across China.

Incidentally, Suzhou PowerCore is a sister company to China Core Technology, or C*Core for short, which is a licensee of the Freescale Semiconductor M-Core and IBM PowerPC instruction sets. C*Core licensed the PowerPC instruction set from IBM in 2010, and its C8000 and C9000 chips are aimed at the same embedded markets as the ARM Cortex-A8 and Cortex-A9 designs. As of the end of 2012, C*Core had more than 40 different system-on-chip designs and had shipped more than 70 million chips for a variety of embedded applications, including digital TVs, communication gear, and auto systems.

The Power8 chip from IBM is expected sometime around the middle of this year. It has twelve cores and eight threads per core, which is 50 percent more cores than the current Power7+ chip and twice as many threads per core. Running at 4 GHz, the Power8 chip is expected to deliver roughly 2.5 times the performance of a Power7+ chip on a wide variety of commercial and technical workloads. The Power8 chip has 96 MB of L3 cache on the die and 128 MB of L4 cache implemented on its memory buffer controllers, and has 230 GB/sec of sustained memory bandwidth and 48 GB/sec of peak I/O bandwidth, which is more than twice that offered by the Power7+ chip.
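For readers keeping score, the core and thread claims in that paragraph work out as follows (the Power7+ figures of 8 cores and 4 threads per core are implied by the article’s own comparisons):

```python
# Arithmetic behind the Power8 vs. Power7+ comparison quoted above.
power7p_cores, power7p_threads = 8, 4   # implied by the article
power8_cores, power8_threads = 12, 8    # stated in the article

print((power8_cores - power7p_cores) / power7p_cores)  # 0.5 -> 50% more cores
print(power8_threads / power7p_threads)                # 2.0 -> twice the threads per core
print(power8_cores * power8_threads)                   # 96 hardware threads per chip
```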

It will be interesting to see what tweaks Suzhou PowerCore makes to this beast.

Not sure we are either, but Gartner has an interesting view: with the right practices in place today, organizations can build a data center that meets business needs indefinitely.  Some food for thought here from Gartner:

With the Right Practices in Place, a Data Center Built Today Could Meet Business Needs Indefinitely

STAMFORD, Conn., October 23, 2013 –

The increasing business demands on IT mean that data center managers must plan to increase their organization’s computing and storage capacity at a considerable rate in the coming years, according to Gartner, Inc. Organizations that plan well can adjust to rapid growth in computing capacity without requiring more data center floor space, cooling or power and realize a substantial competitive advantage over their rivals.

“The first mistake many data center managers make is to base their estimates on what they already have, extrapolating out future space needs according to historical growth patterns,” said David Cappuccio, research vice president at Gartner. “This seemingly logical approach is based on two flawed assumptions: that the existing floor space is already being used properly and usable space is purely horizontal.”

To ensure maximum efficiency, data center growth and capacity should be viewed in terms of computing capacity per square foot, or per kilowatt, rather than a simple measure of floor space. A fairly typical small data center of 40 server racks at 60 percent capacity, housing 520 physical servers and growing in computing capacity at 15 percent each year, would require four times as much floor space in 10 years.

“With conventional thinking and the fear of hot spots at the fore, these 40 racks, or 1,200 square feet of floor space, become nearly 5,000 square feet in just 10 years, with associated costs,” said Mr. Cappuccio. “A data center manager who rethinks his organization’s floor plans, cooling and server refreshes can house the increased computing capacity in the original floor space, and help meet growing business needs indefinitely. We will witness small data center environments with significant computing growth rates maintaining exactly the same footprint for the next 15 to 20 years.”
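The math behind those figures is simple compounding, which this quick check confirms:

```python
# 15 percent annual growth compounds to roughly 4x over 10 years, and
# 1,200 square feet times that factor is the "nearly 5,000 square feet"
# Cappuccio describes.
growth_factor = 1.15 ** 10
print(growth_factor)          # ~4.05
print(1200 * growth_factor)   # ~4,855 square feet
```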

In this scenario, Gartner recommends upgrading the existing server base to thinner 1U (one unit) height servers or even skinless servers, while increasing rack capacity to 90 percent on average by using innovative floor-size designs and modern cooling methods, such as rear door heat exchanger (RDHx) cooling, to mitigate concerns over hot spots. Implementing an RDHx system can also reduce the overall power consumption of a data center by more than 40 percent, since high volumes of forced air are no longer required to cool the equipment.

“An initial investment in planning time and technology refresh can pay huge dividends in the mid-to-long term for businesses anticipating a continuous growth in computing capacity needs,” said Mr. Cappuccio.

The evolution of cloud computing adoption will also provide relief for growing data center requirements and as the technology becomes more established, an increasing proportion of data center functions will migrate to specialist or hybrid cloud providers. This further increases the likelihood of an organization making use of the same data center space in the future, generating significant cost savings and competitive business advantages.

Gothenburg, Sweden – October 1, 2013: According to a new research report from the analyst firm Berg Insight, the global number of mobile network connections used for wireless machine-to-machine (M2M) communication will increase by 22 percent in 2013 to reach 164.5 million. East Asia, Western Europe and North America are the main regional markets, accounting for around 75 percent of the installed base. In the next five years, the global number of wireless M2M connections is forecast to grow at a compound annual growth rate (CAGR) of 24.4 percent to reach 489.2 million in 2018.
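Those forecast numbers hang together: compounding the 2013 base at the stated CAGR lands within a million of the quoted 2018 figure, with the small gap explained by rounding in the published rate.

```python
# Sanity check of Berg Insight's forecast.
connections_2013 = 164.5  # millions of M2M connections
cagr = 0.244
print(connections_2013 * (1 + cagr) ** 5)  # ~490 million vs. 489.2 quoted
```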

The report highlights the connected enterprise and big data analytics as two of the main trends that will shape the global wireless M2M industry in 2014. “The world’s best managed corporations across all industries are in the process of mastering how connectivity can help improve the efficiency of their daily operations and the customer experience”, said Tobias Ryberg, Senior Analyst, Berg Insight. “Some of the best examples are found in the automotive industry where leading global car brands now offer a wide selection of connected applications, ranging from remote diagnostics, safety and security to LTE-powered infotainment services such as streaming music.”

Berg Insight believes that the next step in the evolution of the wireless M2M market will be an increasing focus on data analytics. “M2M applications generate enormous quantities of data about things such as vehicles, machinery or other forms of equipment and behaviours such as driving style, energy consumption or device utilisation. Big data technology enables near real-time analysis of these data sets to reveal relationships, dependencies and perform predictions of outcomes and behaviours. The right data analytics tools and the expertise on how to use them can create massive value for businesses”, said Mr. Ryberg. “Over the next 12-18 months we expect to see a series of announcements of new partnerships between mobile operators and big data technology leaders to address the vast business opportunities in this space.”

Download report brochure: The Global Wireless M2M Market

As we approach year end many IT departments are hearing the call from their users to pump up the systems to handle the end of year load. In some cases a server hardware upgrade can solve performance issues at a fraction of the price of a new server. Adding CPUs or memory can significantly increase a server’s performance. However, not all systems are upgradable, and upgrades don’t always fix poor-performing hardware. Moreover, new technology often provides new opportunities, allowing administrators to deploy more virtual machines or adopt more demanding workloads. Administrators should weigh these pros and cons and consider the expected return on investment and the impact on business operations before deciding whether to buy a new server or upgrade the existing one.
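One rough way to frame that weighing, sketched below with placeholder numbers (plug in your own quotes and workload estimates), is cost per unit of delivered performance:

```python
# Hypothetical upgrade-vs-replace comparison; all figures are placeholders.
def cost_per_unit_performance(cost: float, relative_performance: float) -> float:
    return cost / relative_performance

upgrade = cost_per_unit_performance(cost=8_000, relative_performance=1.4)    # added CPUs/memory
new_box = cost_per_unit_performance(cost=25_000, relative_performance=2.5)   # replacement server

print(f"upgrade:    ${upgrade:,.0f} per unit of performance")
print(f"new server: ${new_box:,.0f} per unit of performance")
```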

Would you like to take a test drive?  Here at ISM we rent all servers, storage, and networking hardware, with rental programs from 30 days to a year.  This way you can see whether the increased performance is something you want to invest in; if it doesn’t meet your needs, simply send it back at the end of the rental period.  If you decide to keep it and would like to buy, we’ll credit your rental payments toward the purchase price.

Whatever your needs, we’re here to help!  Do you need more memory to speed processing?  Additional disk to handle the data?  More or faster tape drive solutions for the increased loads?  Contact us today so we can help you put together a viable solution that fits your needs!

Are you still running legacy servers and/or storage because you’re fearful of making the change?  Is the provider of your legacy software out of business, or have they dropped support for your product?  We’re here to help you overcome those fears and turn a problem into an asset.  Resistance to change and fear of the unknown are part of the human condition; however, as we all come to know, growth rarely occurs without change.

Are you struggling to manage the older technology in your data center?  Maybe you’ve considered a move to something more efficient and less expensive, but the thought of migrating to modern technology is overwhelming or raises fears of work disruptions.  That’s understandable.  But now more than ever, the benefits and relative ease of upgrading to newer technology can have positive results for your team.

Modernizing from older systems to a newer modular and more efficient infrastructure has many benefits, such as:

  • TCO reduction: lower acquisition and operations costs.
  • Administrative efficiencies: reduced hardware requirements, minimized overhead, and efficient, comprehensive tools.
  • Modernization: longer protection of your investment, greater flexibility with third-party innovations for Linux/Windows platforms, and less exposure to expensive proprietary technologies.

While migrations can be a challenge, we at ISM have streamlined the process with a proven, structured approach, backed by 18 years’ experience providing assessment, execution and program management resources to thousands of successful migration customers.  Our skilled team will help you get it done with:

  • Up-front planning, risk mitigation, and time and cost estimates before you make a decision
  • Migration tools backed by experience, migration services, and best practices
  • Migration and infrastructure services covering applications, servers, storage, virtualization, and networking

Do these services sound like something you need?  Get full details about migrating to updated technology, including technical and business white papers, case studies, and TCO calculators, and give your users the forward progress they need!

Well, you save a lot of money, right?  But of course there’s more to IT than that.  Manufacturers always want you to buy the latest and greatest.  Some IT departments have the budget to do that and their users demand the newest bells and whistles available.  But most of us run our business on a budget and we don’t always need the latest and greatest because last year’s model may suit our needs just fine.

It’s kind of like the car I wanted when I turned 16: a brand-new red Camaro.  It was slick and shiny and went fast.  But did I need that?  Nope.  I did better buying a Nissan with 20,000 miles on it, which saved me a bunch of money that I put away for college.  Short term, I met my needs in a way that fit my bigger-picture, long-term goal.

If your IT budget is lean, our refurbished servers, storage, and networking equipment offer high-quality products at a low cost, and they all come with our industry-leading lifetime warranty.  For some applications, the latest, greatest technology is a must.  In other cases, a refurbished product is simply the better choice.  For example:

  • Some business applications don’t require the latest performance
  • Your application must run on a certain platform, and “new” production on that platform has been discontinued by the manufacturer
  • Refurbished equipment offers a better price/performance value

What should I look for when I buy refurbished?  Here are a few tips:

  • Make sure you buy from a pro with the experience and quality control procedures to properly refurbish high end IT equipment
  • Check the warranty.  Are you getting it as-is, or is the seller backing it with their own extended warranty?
  • Ask about return policies.  If you buy it, do you own it, or can you return it to the seller, no questions asked?

At the end of the day, refurbished IT may be the soundest business decision you’ve made lately.  Let us help you find out how to save your IT budget!
