Three Phases of a Legacy Hardware Survey

Decisively or Indecisively Legacy?
Some product lines come with a well-defined roadmap that permits customers to anticipate and plan for upcoming end of service life (EOSL) announcements. This is generally not the case, however, and an edict from a manufacturer that customers must either upgrade or suffer the consequences is usually both unwelcome and unexpected.

Many businesses will simply move to the suggested upgrade product to avoid the hassle. Others will refuse to upgrade for the same reason, leaving them with legacy products in the asset mix out of short-term convenience rather than long-term strategy. But there is value in examining the continuing role of EOSL products in the datacenter, and in deciding to retain them when the objective evidence backs this decision.

Hardware is not going to brick and stop working at midnight on the EOSL date. The real issue in EOSL is the end of support, rather than the end of service. EOSL products might lose callback and onsite guarantees, and after the EOSL date may only be eligible for unpredictable and costly T&M (time and materials) support. Keep in mind that some OEMs may use a fancier name for it and bill it on a subscription rather than a time-of-service model; but if there is no assurance of a person with a part onsite within a certain time, there's not much comfort in that service level.

OEMs might also make it very simple and offer no support whatsoever on some EOSL products, depending on their policy. Other support options exist, of course: the EOSL product might be self-supported by certified techs on your own staff with aftermarket parts, or have full Service Level Agreement coverage under a third-party support contract. If the product is relatively cheap or commoditized, you may have a pool of replacements to draw on for some time after EOSL. Regardless, before you even begin a survey to determine what viability your legacy product(s) have, you must have a clear support coverage solution stated and practiced. Doing nothing until there's no choice is not much of a policy, and it will virtually guarantee that the eventual reactive solution is expensive or inadequate, if not both.

A legacy survey is a three-phase exercise that might take an hour or six months, depending on your level of access to metadata about existing assets and business processes. It may be trivial or overwhelming, and it highlights the value of good asset and systems management tools. Ideally your company has specific review/PMO policies and follows standards like ITIL/ITSM for handling these kinds of scenarios. But few businesses have a clearly defined process specifically for dealing with EOSL announcements and service level reductions, and those that do may not equally address all three essential phases of an EOSL survey. This is in no way meant to suggest that this blog is a better practice than what's presented in ITIL or achieved by adhering to ITSM practices; it only reflects the reality that not all businesses use them, and those that do don't always adhere to them completely.

What you have, what it does now, what it does best
Far from being a cryptic or unintuitive process, the key to a successful legacy survey is applying objective analysis to resource allocation on existing systems and comparing it to resource allocation under one or more hypothetical replacement scenarios. A virtualized and/or private cloud environment starts two steps ahead, because the first two phases of the survey are also necessary for successful virtualization; a traditional datacenter, where specific hardware resources are dedicated to specific processes in a set-it-and-forget-it manner, may find asset and resource allocation data harder to come by. Of course, if you're using a public cloud then EOSL isn't your concern, but you can be sure your hosting providers are concerned with it.

Phase 1: Identify your assets and their capabilities

The first phase of the survey is identifying your assets across the board, in as much detail as is relevant and possible. If you use an asset management solution such as Kaseya or a home-grown dashboard, this is as simple as running a report or two; it may be trivial information you're already on top of. But it may also be an ugly mess of network configuration details, remote server closets in branch locations, or mysterious hardware you're not entirely sure about. In any such case, getting on top of the asset information can be a daunting task depending on the size and complexity of the datacenter; newsletters and websites dedicated to asset management exist for good reason. For the legacy survey you want to see everything you have available, not merely your legacy products; but if you really can't afford a full asset identification phase, you need to at least identify the legacy systems and the processes they touch.
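
If you have no asset management tool at all, even a small script can bootstrap the inventory. The sketch below is a minimal, hedged example in Python: it gathers basic facts about the host it runs on and appends them to a shared CSV. The field list, the file name, and the third-party psutil dependency are assumptions for illustration, not a standard.

```python
# Minimal asset-inventory sketch (assumes the third-party psutil package
# is installed; run it on each host and merge the CSV rows centrally).
# The field list and output path are illustrative, not a standard.
import csv
import platform
import socket

import psutil

def collect_host_facts() -> dict:
    """Gather the basic facts a legacy survey needs for one host."""
    mem = psutil.virtual_memory()
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "machine": platform.machine(),
        "cpu_cores": psutil.cpu_count(logical=True),
        "ram_gb": round(mem.total / 2**30, 1),
        # "/" resolves to the current drive root on Windows as well
        "disk_gb": round(psutil.disk_usage("/").total / 2**30, 1),
    }

if __name__ == "__main__":
    facts = collect_host_facts()
    with open("asset_inventory.csv", "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=facts.keys())
        if fh.tell() == 0:  # write a header only for a brand-new file
            writer.writeheader()
        writer.writerow(facts)
```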

Phase 2: Identify your needs and capacities

The second phase turns the analysis from identifying hardware to identifying need. In practice this can play out in a variety of ways: you can analyze resource allocation over time, at actual maximum use, at average use, and so on. This may be as simple as running resource allocation reports appropriate to the EOSL product: server usage details for EOSL servers, storage use and capacity for EOSL storage arrays, etc. The important thing is to collect as much hard data and as many actual numbers as possible in order to compare current reality with a future possibility, using the speeds-and-feeds information about the replacement products supplied by the vendor(s). This should be easier, if not trivial, for a virtualized datacenter compared to a traditional one; but if you don't already have a good idea of the resource hogs and repeat offenders in the datacenter, it may be both a difficult and an illuminating undertaking. In either case you're not looking for a real-time ticker on what's happening now; you're trying to determine whether processing and processes are already allocated to the best hardware for the job, and if so, what that looks like, by the numbers. Once you have this data you'll have visibility that would be impossible to achieve without completing the first two phases.
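
To make the average-use-versus-maximum-use distinction concrete, here is a small illustrative sketch that samples CPU and memory utilization and reports averages and peaks. The sampling window, interval, and psutil dependency are assumptions; a real survey would sample for days or weeks, ideally through your monitoring platform rather than an ad hoc script.

```python
# Hedged sketch: sample CPU and memory at an interval and report the
# average and the observed peak -- the "actual maximum use" vs. "average
# use" numbers the survey compares. Window and interval are illustrative.
import psutil  # third-party

def sample_usage(samples: int = 60, interval_s: float = 5.0) -> dict:
    cpu, mem = [], []
    for _ in range(samples):
        # cpu_percent(interval=...) blocks for the interval, then returns
        # the utilization measured over that window
        cpu.append(psutil.cpu_percent(interval=interval_s))
        mem.append(psutil.virtual_memory().percent)
    return {
        "cpu_avg": sum(cpu) / len(cpu),
        "cpu_peak": max(cpu),
        "mem_avg": sum(mem) / len(mem),
        "mem_peak": max(mem),
    }

if __name__ == "__main__":
    stats = sample_usage(samples=12, interval_s=5.0)  # one-minute demo window
    print(stats)
```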

Phase 3: Compare current metadata with potential future replacements

In the third and final phase, bring the data from the first two phases together to determine the objective, demonstrable value of retaining legacy systems versus replacing them with new ones. By completing phases 1 and 2 before analysis begins, you are able to compare apples to apples and determine adequacy and cost factors using real data relevant to your business. This standardization provides mathematical evidence beyond cost to back a decision, rather than forcing you to trust a hunch. Without it, you're much more vulnerable to marketing hype or peer pressure, neither of which should be trusted when they contradict objective fact. When evidence is lacking, flipping a coin is a valid means of reaching a decision, but it's better practice to make sure you actually collect and use the available evidence instead. And since every hardware product eventually passes into EOSL, standardizing the collection and analysis of asset and process data is a worthy long-term goal.
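
As a toy illustration of the phase-3 comparison (every number below is invented): given the peak demand measured in phase 2 and the capacity and annual cost of each candidate, you can flag which scenarios are adequate, with headroom, and what each costs per year.

```python
# Illustrative phase-3 comparison: given measured peak demand (phase 2)
# and capacity/cost figures for the current and candidate systems, check
# adequacy and compare annual cost. All numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cpu_cores: int
    ram_gb: int
    annual_cost: float  # support contract + power + parts, or amortized purchase

def adequate(s: Scenario, peak_cores: float, peak_ram_gb: float,
             headroom: float = 1.25) -> bool:
    """True if the scenario covers measured peak demand plus headroom."""
    return (s.cpu_cores >= peak_cores * headroom
            and s.ram_gb >= peak_ram_gb * headroom)

peak_cores, peak_ram = 10.0, 48.0  # from the phase-2 survey (hypothetical)
options = [
    Scenario("retain EOSL server + third-party SLA", 16, 64, 7_500.0),
    Scenario("vendor-suggested replacement", 32, 128, 21_000.0),
]
for s in options:
    ok = adequate(s, peak_cores, peak_ram)
    print(f"{s.name}: adequate={ok}, annual_cost=${s.annual_cost:,.0f}")
```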

Keyword Tags: legacy, server, storage, networking, ITSM, ITIL, EOSL


Disk Cleanup

The Disk Cleanup utility ships with every Microsoft operating system since at least Windows 2000 Professional. You can find it under All Programs, in the Accessories folder: go to the System Tools folder and select Disk Cleanup. Once you double-click the utility, it scans your system and then presents a basic list of the system folders it can clean up; you select which folders to empty. If you're uncertain, you're fine selecting all of the folders, especially if you've never run this utility before.
Note: Before you run any cleaner, always back up any data you might have, just in case of a system jam or failure (in most cases this never actually happens). You also want to bookmark your history items, because if you select everything in Disk Cleanup it will empty your Internet Explorer address-bar history. Best practice is to always keep your favorite locations saved in your Favorites (bookmarks) in Internet Explorer.
After you have made your selection, run the utility; once it's done, it will close itself automatically. This utility doesn't touch third-party program folders, such as another web browser's history folders. If you're using other web browsers, you can use manual cleanup (which I'll discuss later in another blog). You can also use third-party disk cleanup utilities, but be sure you understand what the utility will clean up and which operating systems it's intended for. I also highly recommend making a complete backup before running any third-party cleanup utility, because their complexity might cause your system to stop functioning correctly. In this blog I just want you to understand how to use your system's own tools, which do a very basic and safe job. Most professional computer techs use various tools for different cleanup approaches, and they understand how to use those tools; it's not recommended that you try them unless you understand their use and consequences. If you have any questions, or suggestions on how to use some of your system's basic utilities, feel free to contact me at support@pctech4u.org.
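
For readers comfortable with a little scripting, the built-in utility can also be run unattended using cleanmgr.exe's documented /sageset and /sagerun switches. The small Python wrapper below is just a convenience sketch (the profile number is arbitrary); the same two commands can be typed directly at a command prompt or scheduled with Task Scheduler.

```python
# Windows-only sketch: drive the same Disk Cleanup utility (cleanmgr.exe)
# from a script. /sageset:N opens the checkbox dialog once and saves your
# folder choices under profile N; /sagerun:N then replays those choices
# silently. The profile number is arbitrary (0-65535).
import subprocess
import sys

def configure_cleanup(profile: int = 1) -> None:
    """Open the Disk Cleanup options dialog and save choices to a profile."""
    subprocess.run(["cleanmgr.exe", f"/sageset:{profile}"], check=True)

def run_cleanup(profile: int = 1) -> None:
    """Run Disk Cleanup unattended with a previously saved profile."""
    subprocess.run(["cleanmgr.exe", f"/sagerun:{profile}"], check=True)

if __name__ == "__main__":
    if sys.platform != "win32":
        sys.exit("Disk Cleanup (cleanmgr.exe) is a Windows utility.")
    configure_cleanup()  # run once to pick folders
    run_cleanup()        # reuse those choices on a schedule
```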
May you have fun computing.


Alienware M11xr3

Type: Gaming, Media, Gaming Ultraportable
Processor Name: Intel Core i5-2537M
Operating System: Microsoft Windows 7 Home Premium
Processor Speed: 1.4 GHz
RAM: 4 GB
Weight: 3.3 lb
Screen Size: 11.6 inches
Screen Size Type: widescreen
Graphics Card: nVidia GeForce GT 540M
2nd Graphics Card: Intel HD Graphics 3000
Storage Capacity (as Tested): 500 GB
Networking Options: 802.11n


The Alienware M11xr3 ($1,099 direct) is the next in the company's line of 11-inch gaming laptops. Last year the Alienware M11x ($1,175 direct, 4 stars) turned heads with its netbook-sized design outfitted with gaming-grade Nvidia Optimus technology. This year's model upgrades to a second-generation Intel Core i5 CPU, a more powerful Nvidia GeForce GT 540M graphics chipset, and more ways to connect to your favorite games online via Wi-Fi and WiMAX, which is why it's our Editors' Choice for portable gaming laptops.


Design
Design-wise, Alienware is not known for its subtlety. And sure enough, the M11xr3 sports glowing blue LEDs (it can also be configured with Astral Aqua, Mars Red, Nova Yellow, Terra Green, and Plasma Purple lighting). Between that, the backlit keyboard, and the glowing Alienware logo and grills on the bottom front of the chassis, you won't have any trouble finding this laptop in the dark. The rest of the M11xr3 is coated in a soft rubberized textured material over its Stealth Black magnesium frame. The interior has a textured plastic design along the palm rest, which offers a comfortable amount of friction when typing. The trackpad has a smoother textured pattern, and has a slight depression to distinguish it from the palm rest.


The 11-inch widescreen displays at 1,366-by-768 resolution, which is enough to take in a game or movie but leaves little room for multitasking. Unfortunately, the M11xr3 suffers from the same glare problem as the previous M11x, which becomes more noticeable when gaming under bright fluorescents. Because this laptop is netbook-sized, the keyboard is smaller than the full-sized ones you'd see on 14-inch laptops like the Samsung QX410-J01 ($829.99 street, 4 stars) and Asus U41JF-A1 ($857 street, 4.5 stars). For me, the tighter typing experience was easy to get used to. The M11xr3 is heavier than your typical 11-inch laptop, weighing 4.47 pounds, whereas the Lenovo ThinkPad X120e ($580 direct, 4 stars) weighs all of 3.3 pounds. The increased weight is understandable, as the M11xr3 is packed with more internal hardware.


Features
The M11xr3 doesn't have an optical drive. With the rise of digital downloads and Steam (which comes pre-installed), one is hardly needed, especially when the laptop is equipped with WiMAX (a wireless technology offering coverage similar to that of 3G and 4G smartphones) and Wi-Fi to connect you to your downloads and MMOs when you're without an Ethernet port to jack into. The M11xr3 also comes with a DisplayPort, HDMI, Ethernet, a 3-in-1 card reader (MMC, SD, MS/Pro), two USB 3.0 ports, one USB 2.0 port, and FireWire. The USB 3.0 ports will come in handy if you ever run out of space on the 500GB 7,200rpm hard drive, as they provide faster data transfer speeds when hooked up to a USB 3.0 external hard drive. Also, DisplayPort, HDMI, and WiDi 2.0 (a wireless display technology that lets you stream your computer's content to an HDTV, provided you have the Netgear Push2TV receiver) all allow you to hook up an external monitor, so when you're gaming at home you don't have to limit yourself to the 11-inch screen.


Performance
The M11xr3 comes fitted with a 1.4GHz Intel Core i5-2537M second-generation, dual-core CPU. It's a low-voltage processor, which draws less power than standard-voltage parts. There's also 4 GB of RAM and an Nvidia GeForce GT 540M graphics chipset. On our PCMark Vantage (6,183) and Cinebench R11.5 (1.69) tests, it fared better on the former than the latter when compared against other laptops with similar CPU power. It beat out the MSI FX420-001US's ($800 street, 4 stars) 5,913 PCMark score, though not by much, but it couldn't best the MSI FX420's Cinebench score of 2.6. Other results in our timed image and video encoding tests, like Photoshop CS5 (5 minutes 23 seconds) and Handbrake (3:33), showed that the M11xr3 couldn't quite compete against the MSI FX420 and HP Envy 14-1210NR ($999.99 list, 4 stars). But processing power isn't supposed to be the M11xr3's forte; gaming is.


In 3DMark 06 (8,652 Medium Quality and 6,500 High Quality) it bested the MSI FX420 and HP Envy 14 by at least 1,000 points at Medium Quality settings and a few hundred at High Quality settings. In our Crysis DirectX 10 gaming test the M11xr3 played at 55.6fps at 1,024-by-768 resolution. In Lost Planet 2 (DirectX 9) it played at 39.9fps (1,024 by 768) and 20.6fps (High Quality). However, it should be noted that when I was playing Portal on my desk, the M11xr3's temperature reached 98 degrees, as measured on the underside of the laptop with a Fluke thermometer. So if you regularly put your laptop on your, um, lap, then you may want to consider getting one of those lap desks, unless you like the idea of toasted skin syndrome.


The M11xr3 features a 63Wh battery, and combined with the low-voltage CPU and Nvidia's Optimus technology, which automatically turns off the discrete GPU and switches to the integrated graphics for non-3D tasks, it managed a battery life of 8 hours 3 minutes. Other laptops with discrete graphics did not fare as well, like the MSI FX420 (5:18), HP Envy 14-1210NR (4:30), and Samsung QX410-J01 (6:27). The only laptop able to best the M11xr3 was the Asus U41JF, which lasted just 16 minutes longer.


Alienware no doubt rules the portable gaming laptop space, and it has had little competition (unless you count Razer's Switchblade prototype). The only laptops that come close are in the ultraportable category, like the Asus U41JF-A1. The Alienware M11xr3 caters to a niche crowd, and while it may not offer the best performance in every test, it excels in the areas that matter most: portability, gaming, and battery life. That's why it earns our Editors' Choice in the portable gaming laptop category.



More laptop reviews:
•   Alienware M11xr3
•   HP Pavilion dv7-6143cl
•   Asus Eee PC 1015B
•   Samsung Chromebook Series 5
•   Toshiba Tecra R850-S8540



When the cloud is too puffy: 5 tips for avoiding costly inefficiencies in a cloud migration.

First, let's define our terms. "Cloud" is one of those lovely marketing terms that everyone has a definition for but no one can really define. While you can talk about the 'Private Cloud' (like EMC does) or 'Cloud-Based Apps' (like Salesforce.com does), and be perfectly correct in doing so, notice that in both cases the word is either modifying or being modified by some other descriptive term. The cloud concept itself is something separate, and it goes way back to the days when the internet was a newfangled concept that had to be slathered on every piece of marketing. Doesn't that sound familiar? Basically you'd have these architecture diagrams where you'd connect servers and clients and databases with lines and then put another line leading to a cartoon cloud, like a thought-bubble, labeled 'internet'.

2007 diagram showing the internet as a cloud

(Source credit: http://www.softwareprojects.com/resources/programming/t-converting-a-standalone-server-application-to-a-web-1321.html)

In the years since this practice started we've sliced and diced the term up into a number of broad but distinct concepts, and in only a few of them could you possibly confuse the cloud with the internet. On one end, the private cloud, the term gets mixed up with virtualization; and on the other end, Software as a Service (delivered via the web) does exist on the internet but it is not itself the same thing as the internet.

In any event, the first tip for keeping cloud expenses under control is to 1. Clearly define what you mean by cloud in your specific circumstances. This goes beyond keeping job descriptions and department budgets neat and tidy; as is evident in the recent report by the CIO of the United States, Federal Cloud Computing Strategy, it is far too easy to go down the rabbit-hole of leaving terms open-ended and thus finding your end-goals undefined. This leads to equivocation rather than decision-making when choices must be made. The paper goes into specific and useful detail regarding the broad concept, and it may be a useful template for your migration or management efforts.

2. Keeping old systems and processes is not the same thing as archiving old data. Far too often, because of an insufficiently powerful migration team or a poorly defined reduction goal, companies are stuck with mildly or wildly hybrid cloud solutions out of unwillingness to part with old hardware or software. This may be as simple as keeping the old email server and system even when a new web-based service is brought on, or retaining the old customer relations management system after adopting a cloud software-as-a-service solution rather than migrating the old data into the new. It may be easier than a data conversion, and since you've already paid for the old hardware/software it may be less costly; but it is never going to be as clean and manageable as having only one system to maintain, and the maintenance price will become a growing, ongoing burden over time.

This is a particular issue when we're talking about moving from a traditional datacenter to a cloud model. Sometimes the talented people doing the labor are beholden to a person in the approval chain who is too fearful to turn off the old storage arrays even after they've been migrated - again, in the interest of 'archival use', and/or out of fear of noncompliance with HIPAA or SOX. If you don't turn off the old stuff, you've only increased your expenses and complexities while reducing your visibility of and control over your data. On the other hand, regulatory compliance is no simple matter, and you may have legacy boxes or other special circumstances to deal with. If so, you can usually find a reasonably priced and highly qualified maintenance solution via third-party providers who can handle your existing post end-of-life hardware, even if the original manufacturers would prefer to further complicate things by demanding hardware upgrades first.

3. Avoid cloud for the sake of cloud. The paper cited above from the CIO of the United States arguably gets caught in this very pitfall: after citing legitimate but general benefits, the conclusion drawn is that all systems that can be converted to cloud must be converted to cloud. When the initial premise is that everything has to be completely 100% cloud-based, you have to step back and ask why - the benefits of cloud computing are in no way contingent on incorporating cloud technologies across the board, and 100% cloud is not the same thing as 100% one-point-of-contact efficiency. Cloud application services may come from dozens of companies, and many cloud storage customers choose a different provider for cloud backup, or opt for a private cloud for one and the public cloud for the other. Instead of choosing a model and then forcing the organization to adapt to it, look for the areas of greatest inefficiency, highest cost, and least profitability. Those are the priority targets for cloud conversion. If other aspects of the business are operating optimally, there may be little to gain by converting them to a cloud model. In other words, if it's not broke, it's legitimate to at least ask why it's best to fix it, rather than assuming that all things cloud are automatically better in every specific case simply because the cloud has general cost, management, and maintenance advantages.

4. Understand the security and availability levels. You should approach a cloud service or hosting relationship as though you are putting that aspect of your business into the hands of the least qualified, least ethical, least reliable person at that company. When you make this assumption it's often more clear whether you're going to have trouble or not, depending on how complex the relationship is and how dependent on individual personalities rather than impersonal features the service is.

In a traditional data center you hire and assign your own employees, and can dismiss them if they fail to meet your requirements. The people involved have an accountability to the executives which is both an incentive to do a good job (get paid, get raises, get recognition) and a disincentive against really fouling things up (get demoted, get fired, get humiliated). None of these things exist between your company and an outsourcer, and that's the case not just for cloud services but for any outsourcing you may do, from telemarketing to shipping.

It's important to remember this, especially in the RFP and consideration phase, because other subscription-based outsourced tasks are usually commodities. When your only recourse short of major legal action - hardly a quick fix - is to cancel your subscription, you may find yourself uneasy handing over control to an outsourcer. On the other hand, having a strong business ally and advocate is a relationship worth pursuing. It is imperative, however, that you work closely and comfortably with your choice rather than taking a hands-off route.

A key tactic for achieving peace of mind is to request that bidders explain in their proposals what practical difficulties may reasonably be expected if the relationship fails or their company is acquired. This goes way beyond contractual wording about responsibilities; you need to know how hard it will be to get control back or transfer control of the service to a different business. Can you securely and immediately retrieve all your current and backup data, or is it lost if you decide to end a cloud relationship with the host or service provider?

Even when the cloud tool is as simple as a CRM or email marketing system, it's critical to maintain a degree of independence, because you can only be responsible for, and reasonably confident in, the business practices and long-term viability of your own company. Along the same lines, you must be certain that your provider maintains compliance with the regulations your business had to comply with before moving the service to the cloud. Assuming that your provider understands your business is itself a risk: hosting businesses are not healthcare businesses, and they do not understand the specific needs of your vertical any better than you understand the specific needs of theirs. Ultimately you should not enter into a long-term agreement that leaves you on the side of the curb with hundreds of other disgruntled customers when a provider is acquired or makes a major policy change.

5. Understand the pricing model fully. Especially for services with a subscription model, it's critical to understand exactly where the breaks are between tiers of payment (and, when applicable, tiers of service as well). The time to find out whether your provider is charging by the month, by the record, or by the gigabyte is before you've decided on anything, not after you've gotten an invoice that sends accounts payable into the stratosphere. When IT controls these decisions this is simple enough, but is sales responsible for the CRM or the customer database? They need to know what to look for, or your policy should require IT approval for these kinds of one-off purchases.

Under normal circumstances this is all part of the proposal consideration phase, but too often all that's noticed then is how the pricing compares to other providers. Pricing should be considered in its own right relative to annual budgets. Ideally, multiple scenarios should be priced out in writing by the provider so that the effects of different circumstances can be seen by all, and so budgeting can be adequate rather than insufficient. Running a scenario also helps to expose potential bottlenecks or hidden cost bloat pitfalls.
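
Here is a deliberately toy example of scenario pricing. The tier table, overage rate, and usage figures are entirely hypothetical; the point is to run each realistic scenario through the provider's actual pricing before signing, and to see where a small change in usage crosses a tier boundary.

```python
# Hypothetical sketch of tip 5: price out multiple usage scenarios against
# a provider's tier table before signing. Tiers, rates, and scenarios below
# are invented for illustration; substitute the figures from each proposal.
TIERS = [  # (gigabytes included, monthly price); overage billed per GB
    (100, 50.00),
    (500, 180.00),
    (2_000, 500.00),
]
OVERAGE_PER_GB = 1.25

def monthly_cost(usage_gb: float) -> float:
    """Cheapest tier covering the usage, or top tier plus overage."""
    eligible = [price for cap, price in TIERS if usage_gb <= cap]
    if eligible:
        return min(eligible)
    top_cap, top_price = TIERS[-1]
    return top_price + (usage_gb - top_cap) * OVERAGE_PER_GB

for scenario, gb in [("today", 80), ("12-month growth", 450), ("spike", 2_600)]:
    print(f"{scenario}: {gb} GB -> ${monthly_cost(gb):,.2f}/month")
```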

Refunds and make-good time for outages should be discussed as well. Cash or free services are the normal compensation in case of an error on the part of the provider. In particular make sure you know what documentation is necessary for reporting a problem that isn't immediately obvious; downtime is not the only thing that might go wrong, and when your hardware is offsite it's much harder for IT teams to identify and correct configuration or reporting problems. Here too it's important to involve employees outside of IT who may be making cloud purchasing decisions when it comes to web-based services for their department.

This list could probably be twice as long, easily. What other things do you think are helpful to keep in mind when trying to manage a cloud migration project in a cost-effective way?

Keyword Tags: cloud, efficiency, storage


Locavore (for iPhone)

Across much of the U.S., the agricultural growing season peaks in June and July, with a bountiful harvest time in September and October. During peak growing and harvest season, locavores, or people who only eat locally grown and produced foods, can finally fatten up. The Locavore app (free), for iPhone and Android users, helps U.S., Canadian, and U.K. residents identify the fruits, vegetables, legumes, and nuts that are in season in their state or province. The app takes a visual approach, incorporating maps, pictures of foods, pie graphs, and charts, all with rich colors as vivid as those you'd find in a farmer's market.


While few people take the locavore challenge to the extreme and shun anything imported from more than 100 miles away, many people see the appeal in adding more seasonal and locally-sourced ingredients to their diet when they can. When your food is locally sourced, it spends less time traveling in a truck, which means less fuel is being used to transport it, and that the food can be harvested when it's ripe (ripe produce doesn't easily survive a cross-country or overseas journey from farm to supermarket, so it's usually picked and shipped underripe, resulting in less flavorful and possibly less nutritious food).


What's in Season?
Download and install the free app, then start by getting acquainted with the five-button navigation bar that runs across the bottom of the screen: In Season, I Ate Local, Markets, Browse, and About. The In Season button calls up a list of ingredients that are currently ripe and available to buy in your state, based on your GPS location, along with the approximate length of time they have left in the market. The foods are listed in order of how much time they have left, shortest to longest. In mid-June, when I loaded this screen for New York, strawberries were at the top (two weeks left) and potatoes were at the bottom with 10.5 months left, since they're available nearly year-round. Below the list of what's available is another set of foods coming into season soon: cauliflower, lettuce, raspberries, and so on. Tap any of these items, such as "peas," and the app launches an interactive map that points to both farms and markets that sell the locally grown item you want.


At the very bottom of the In Season screen is the option to pick a neighboring state and see what's in season there. This feature is great for people who live on the border of multiple states. For example, in the New York tri-state area, we rely heavily on agricultural products from New Jersey, Connecticut, and Pennsylvania.


The I Ate Local button is Locavore's attempt at providing social interaction. Users log in via Facebook and share a short post about what they ate and where. The I Ate Local screen can show posts from people within 100 or 500 miles of your location, or from "the world," meaning everywhere. While this section of the app may be showing you posts from complete strangers, some of them may inspire you to cook something new or try an ingredient you've never tasted before.


Tap Markets to pull up a map that shows markets in your vicinity selling all kinds of seasonal and local food—not just the ingredient you need. In testing this piece of the app, I recognized most of the big "green markets" (a classification New York City uses for some kinds of farmer's markets), but none of the produce shops in the neighborhood where I live. It's possible these markets were left off because they do not exclusively sell local ingredients (bananas don't grow in the continental 48, my friends). However, I also saw a few outliers on the list, locations that appeared to be the headquarters or offices of some market group, but certainly not a shop.


The Browse button had my undivided attention when I first started to play with it, until some of its options turned up dry. You can browse fruits and vegetables, places, recipes currently in season, and recipes coming into season. Browsing is what it sounds like: a way to explore foods, markets, and recipes without the restriction of looking only at what's available and in season near you. But behind the two recipe buttons, I found nothing at all. Until that part of the database is updated, Locavore should just remove those buttons; they tease users with the promise of hands-on, practical information about what to do with the food they buy.


In the About tab, users can find the resources the developers used to compile the information in the app. Users can also override the GPS location by entering their ZIP or post code here.


Short Shopping List
As locavores know, produce isn't the only seasonal or locally-produced food. Cheese, meats, and even beer and wine can also have specified times in the annual cycle when they are produced, aged, and ready for consumption. Locavore is off to a good start with fruits, vegetables, nuts, and legumes, but it falls short right now on helping people stay informed about what it means to eat locally and seasonally. A full diet needs to consider these food and drink products, too.


Locavore for iPhone and Android might be able to guide people toward seasonal produce, but it's really missing its mission by not following through with recipes, and a more thorough appreciation for the seasonality of meats, cheese, and beverages. Locavore is heading in the right direction, but it still has a lot of work to do. I don't know of any other free app that takes on this challenge, so until one surfaces, use Locavore for information about fruits and vegetables, but bear in mind that there's more to the local story.


More iPhone App Reviews:
•   Locavore (for iPhone)
•   Instagram
•   Numbers for iPhone
•   Pages for iPhone
•   Square



Music Unlimited (powered by Qriocity)

With Spotify hot on its heels, Sony has joined the busy, cloud-based music market with Music Unlimited (powered by Qriocity). The service has a substantial 7-million-song library from the big four labels, two competitively priced monthly plans ($4/month and $10/month), and plenty of ways to listen, including Sony PSPs and PS3s, Bravia HDTVs, home theater systems, and even an Android app. Whether these assets will be enough to overcome the service's glitches, limitations, and late arrival is unclear.
Music Unlimited faces daunting competition. There's the iTunes pay-per-song juggernaut, which looks even stronger with cloud-based availability and the forthcoming iTunes Match service for $25 a year. The now-public Pandora Radio is flush with cash and owns streaming Internet radio and some killer tech, namely the Music Genome Project, an in-depth taxonomy and complex algorithm that reads hundreds of musical "genes" to find like-sounding songs. And, when it comes to paid subscription services, Sony will have to challenge the comparably priced but device-agnostic Napster and Rhapsody.
With no shortage of competitors, Music Unlimited needs a double-platinum debut to stay on the air. What it has today is a good start, but hardly number one with a bullet. The web-based interface is impressive, as are the ways to experience the content. That may not be enough to persuade consumers to stray from established services, but, paired with Sony's brand in consumer electronics and gaming, Music Unlimited has a shot.
The Setup
Sony offers two plans: the Basic, $4-per-month plan, and the Premium, $10-per-month plan. Both plans allow you to listen to music on multiple devices (see my review of the Music Unlimited Android app), transfer existing music into the cloud, and create your own mood-based channels; however, the $10 plan is the play you want, because it offers customizable playlists, your own personal library, and unlimited playback (a must-have for a subscription service). I began testing Music Unlimited using the 30-day free trial of the Premium account. The sign-up process was straightforward, though I was a bit annoyed that I had to submit billing information to begin the trial; this is something to watch if you decide not to embrace the service.
The Sync
While Music Unlimited is browser-based, some functionality is Windows-specific. For example, I wanted to use Music Sync to transfer my existing library into the cloud, but the feature requires a Windows-only plugin that you can only install for one computer per account (an issue if you have music scattered across several desktops). I transferred one album (Beck's Modern Guilt) onto my Windows desktop and installed the plugin. While Music Sync identified all thirteen tracks in my iTunes library, it only transferred the first two. I have tried pausing and restarting the transfer, rebooting my machine, and reinstalling the plugin, to no avail. Compared to Amazon's Cloud Drive, Music Sync needs work. I would love to see Sony take a page from Amazon and make transferring music into their cloud simpler, more reliable, and browser-based.
The Library
Despite the trouble I had getting my music into Sony's cloud, I found most of what I wanted in Sony's library. Sony claims 7 million songs from all four major labels, much more than Pandora (800,000 songs), though a far cry from Apple (18 million songs), Amazon (16 million), Napster (12 million), and Rhapsody (11 million). Using market and library leader Apple as the baseline, I searched for the top 10 singles and albums on iTunes, and I found them all using Music Unlimited. Less mainstream choices were hit and miss: for example, I found Beirut's just-released East Harlem, but I could only locate six songs from Sufjan Stevens, a small fraction of his ten albums' worth of material.