Grant Mongardi

Recent Posts

When $10 Billion companies fail at security, how can I succeed?

Posted by Grant Mongardi on Mon, May 08, 2017 @ 10:11 AM

Tags: Security, MFA, Phishing

CoinsMoney.png

This month it was reported that Google and Facebook were both victims of a phishing attack that netted the Lithuanian man perpetrating the scam in excess of $100 million. So let's get this straight: two of the largest technology companies in the world, both of whom probably have some of your private data stored in their technology, fell victim to a phishing attack. These are companies that make tens of billions of dollars every year in revenue. They have tens of thousands of employees, a large portion of whom work solely on the security of their products. So how on earth can your comparatively tiny IT department protect your small-to-medium-sized business?

Training your employees to be aware of these things just isn't enough anymore. You must implement mechanisms that not only prevent this sort of social engineering, but do so in a way that allows your employees to continue to be productive and effective. Informing the employees at your company of the dangers of phishing scams is definitely valuable, but pairing that with a demonstration of how the mechanisms you've implemented protect them ensures that every time they use your solution they'll be reminded of why. This helps ensure continued diligence on the part of those users. It's no longer just the "draconian IT overlords" making their lives more difficult; it becomes a story of how you helped them save their job.

By adding mechanisms such as multi-factor authentication, OATH tokens, and mobile device management, among other processes and procedures, you can reduce, if not completely negate, the likelihood of one of your employees falling prey to one of these phishing scams as these two technology giants did. Seriously, you can do this in a way that fits your budget and still lets your employees be productive and effective.

Whether you're trying to prevent such a security nightmare or simply trying to ensure you pass an audit with a gold star, NAPC can help you choose exactly the solutions you need to make this all work. Contact NAPC today for a free consultation or demo of our offerings.

Joe Sent Me. Multi-Factor Authentication for the Roaring 2020s.

Posted by Grant Mongardi on Thu, Dec 08, 2016 @ 10:31 AM

Tags: Security, AD, Centrify, cloud security

JoeSentMe02.jpg

In the days of Prohibition, getting your Appletini was much more difficult than it ever should have been. Foremost was the fact that Appletinis didn't exist. Other than that, you would need to know where a speakeasy was, have the password ("Joe sent me"), and not be a copper (the law-enforcement kind, not the British penny). In fact, this was multi-factor authentication: something you know, something you have, and something you are (or are not, in this case). NAPC can help you revisit those roaring '20s, but for the 2020s, and perhaps help you cut down on your Appletini consumption in the process.

Multi-factor Authentication, or "MFA", is one of the buzzwords of 2016. Everyone is saying it, but many people don't quite understand what the mechanism means for their security. It doesn't just protect against the brute-force attacks we've seen in our system logs for years; it also means more elaborate exploits can be mitigated.

Take this year's huge growth in spear-phishing attacks. Some of the largest, most security-conscious corporations and government entities fell prey to spear-phishing attacks that involved advanced social engineering combined with compromised email accounts. For the attacker, access to the victim's email account means they can first analyze the communications in the victim's inbox, then craft an email interaction masquerading as someone the victim knows and convince them to do something they wouldn't otherwise do, such as wiring money or sending proprietary information.

Add multi-factor to Windows logins

There are many things you can do to help avoid such a scenario, but MFA is probably the most effective. If all of your internal and privileged resources are protected by MFA, then you've short-circuited the attacker at the outset. If the attacker doesn't have the second factor for your email account login, they can't even get started. It doesn't matter if they have the password; the second factor prevents them from ever using it.
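To make the second factor concrete: many MFA deployments use time-based one-time passwords (TOTP, RFC 6238), where the server and the user's phone both derive a short-lived code from a shared secret. Here's a minimal sketch in Python; the secret shown is just the RFC test vector, not anything tied to a particular product.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The phone and the server compute the same code independently; a stolen
# password alone is useless without the current value.
print(totp(b"12345678901234567890", at=59))  # RFC 6238 test time -> "287082"
```

Because each code expires within seconds, a phished password plus an old code buys the attacker nothing.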

But attacks aren't just limited to email. Cloud services are affected as well. Whether it's your office suite, your CRM suite, or even your accounting solution, you really must protect all of your corporate services. Any little bit of information about your company or internal processes can give a smart attacker a leg up on how to convince your employees to do something they probably wouldn't normally do.

Lastly, there is what's inside your firewall: people. People have flaws, and mitigating those flaws can make your work an endless nightmare. Many of the things an employee might do to ruin your weekend aren't necessarily intentional, but they can nonetheless make you wish you could crush their skull with a rock. Okay, perhaps that's a bit extreme, but the sentiment is similar. Preventing users from being their own worst enemy extends inside your firewall as well. Adding MFA to servers, desktops, and even network hardware ensures that nobody inadvertently has access to something they shouldn't, and sharing account information "for convenience" becomes useless. By adding MFA to a tunable privilege-escalation mechanism, you ensure that users both are who they say they are and are allowed to do what they are doing, every time they do it.

Contact NAPC today and we can help you navigate all of these issues and address any concerns you might have with reliable, secure solutions from Centrify. NAPC has the expertise in all of these technologies and experience in addressing all of your issues. Whether you have an audit pending or have had an "event" that you need kept confidential, NAPC can help.

Learn More about securing your Enterprise with MFA 

DAM, MAM and GLAM!!!!

Posted by Grant Mongardi on Mon, May 02, 2016 @ 10:53 AM

Tags: BAM, Xinet, DAM

THIS is the Xinet User Interface you've been asking for!

Glam02.png

Create visually stunning, mobile-friendly, and state-of-the-art interfaces for each of your customers and end users with E6 from NAPC. We've spent years developing for and extending the underlying toolset to meet the needs of our hundreds of customers in order to bring you a fully customizable UI with tools that you could only dream of.

We've built this interface from the ground up based upon our years of experience with Digital Cybrarians, Studio Managers, Retouchers, Video Production departments, Brand Managers, and Printers. We've done our best to add all of the tools and UI enhancements that our customers and their customers have asked for. And we're adding features every day. Here are a few screenshots showing just a touch of what you can get from Xinet when you use E6 from NAPC:

A zoomable, pinchable, draggable viewer!
GlamViewer.png

Mouse-over large previews!
Glam01.png

A swipe-able, side-scroller Gallery View with popup metadata!
Glam03.png

A Drag-N-Drop Lightbox for basket integration, side-by-side compare tool, emailing and file management!
GlamDnD01.png

 

Call NAPC today to schedule a full demo of our product suite. This is something you should really get excited about. We are!

 

Zero is HUGE!!!!

Posted by Grant Mongardi on Wed, Nov 11, 2015 @ 02:00 PM

Tags: Security, AD, IT, cloud security

Zero Sign-On for Zero hassles. Simple solution to everyday frustration.

Wait? What did I just say?
Yes, it sounds like I've gone completely crazy, but the kind of zero I'm talking about is huge. For everyone!

I'm talking about Zero Sign-On. You might be saying "I've heard of Single Sign-On (SSO) Grant, but what the heck is Zero Sign-On?". Zero Sign-On is the idea that if you can identify the device being used to connect then you can assume that device belongs to and is controlled by someone you know, and as such can let them connect without actually having to type a password. It's physical security, much like a door key or pass-card is. If I know that your mobile phone or tablet is owned and controlled by you, then I should have no problem using that device as the unique identifier indicating that you are the one trying to connect. Better yet, if I know the device is controlled by both you and me, I can be very comfortable in asserting to anyone that I can control access of both the device and the end-user.

"So Grant, how does all of this work?". In short, by uniquely identifying and then "tagging" that device, be it a phone, tablet, or even a netbook, you can use it as a pass-key to get into protected resources without having to type a password. The device uniquely identifies you as you, rather than a user/password combination. Not only can it not be "hacked" without the actual device, it can't be easily "shared" the way a username and password can.
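One way to picture how device "tagging" can work (this illustrates the general pattern only, not Centrify's actual protocol): at enrollment the device is issued a secret key, and on every later connection it proves possession of that key by signing a fresh server challenge. The device IDs and store below are purely illustrative.

```python
import hashlib
import hmac
import secrets

enrolled = {}  # device_id -> per-device secret issued at enrollment

def enroll(device_id):
    """Register a device and issue the key it will use to prove its identity."""
    key = secrets.token_bytes(32)
    enrolled[device_id] = key
    return key

def sign(key, challenge):
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(device_id, challenge, signature):
    """The enrolled device itself, not a typed password, is the credential."""
    key = enrolled.get(device_id)
    return key is not None and hmac.compare_digest(sign(key, challenge), signature)

key = enroll("grants-phone")
challenge = secrets.token_bytes(16)       # fresh challenge per login attempt
print(verify("grants-phone", challenge, sign(key, challenge)))  # True
print(verify("strangers-phone", challenge, "bogus"))            # False
```

Unenrolling a stolen device is just deleting its entry from the store, which is exactly the kind of remote kill-switch described below.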

"Yeah Grant, but what if someone steals it?". Well, with a proper service like Centrify's IaaS Cloud service, all of that should be taken care of. Centrify's offering lets the user register their own devices under their user account. In addition to using it for Zero Sign-On and changing forgotten passwords, it also lets them find the device on a map, lock it remotely, wipe it remotely, and even see the battery charge level. More importantly, it lets you, the IT or Security Administrator, do important things like apply group policy to the device (encrypting storage, screen-lock time, passcode length/complexity, etc.), unenroll the device and disable Zero Sign-On, and lock or wipe the device.

Centrify Cloud lets you find your lost device, lock it, and even wipe it!

Finally, it lets you see and report on the device's activity, and even see whether it's been jail-broken and is being backed up to the Cloud!

"So what about the user's laptop?". Well, if that user has a laptop capable of Integrated Windows Authentication (IWA), then they can use that for Single Sign-On, accessing their services without typing their password again. Centrify DirectControl for Macs will enable IWA on Apple Macs, and it's built into Windows, so they just log in to the laptop and they're done.

So a few of the best "Zero"s are: Zero support, Zero audit findings, and Zero shared credentials. And that all translates into infinitely better security and tighter controls over your valuable corporate resources.

For more information on Centrify Identity Service or other great products from Centrify just contact us at TheExperts@napc.com and we'll be happy to give you a full demo. We'll also be having a Webinar on Elegant 6 SAML and Centrify's Cloud service on November 19th, 2015 at 2:00 PM EST. Register here to join us for an hour!

 

Password Performance That Isn’t A Compromise

Posted by Grant Mongardi on Wed, Nov 04, 2015 @ 09:12 AM

Tags: Security, AD, cloud security, Password, SSO

So the question often arises: how can I have a secure password that I can remember and that still meets the criteria of my policy? It comes up all of the time. Most systems place criteria on setting a new password, something like this:

  • must be at least 8 characters
  • must contain uppercase & lowercase characters
  • must contain a number
  • must contain a symbol

and often:

  • cannot contain the username
  • cannot contain consecutive duplicate characters

Although there is some dissension as to whether or not all of these criteria are necessary, they certainly do help. They mean your password can't simply be your dog's name or your daughter's birthdate. But most people have a problem even creating passwords that meet the criteria, never mind remembering them.

I suggest thinking of the password differently. If you think of your password as a "pass-phrase" rather than a single word then you are much more likely to both remember it and to create one that is very secure. First you can think of a subject that you're connected with. For example let's say that you're a huge fan of computer games. Perhaps you might create a password like this:

It's a-me, Mari0! 

    or 

It's super effect1ve!

Those certainly meet our criteria, assuming your name isn't Mario. And hopefully you can remember them. What if you're a sci-fi movie buff? How about these:

Han sh0t first.

I'll b3 back.

We're all standing n0w!

I'm afra1d, Dave.

I'm my 0wn best friend!

In any case, the idea is that you create passwords based upon a phrase you can remember. Use punctuation just as you normally would, because that helps meet the special-character requirement. Finally, replace a word or a character with a number or symbol until you've met the required criteria.

Mind you, this isn't a perfect solution, but it meets the criteria and is far more secure than an 8-character password that you can never remember (and then can't reuse). Using a password like this for 60 days far exceeds the security of a random string of 8 characters. A desktop PC utilizing a couple of GPUs can crack 3,750 8-character passwords in about the same time it takes to crack a single 10-character password. Add more characters and those numbers get even better. Most of the passwords above would take years to crack, if not decades.
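The arithmetic behind that claim is simple: every extra character multiplies the search space. A quick back-of-the-envelope comparison (these figures assume truly random characters drawn from the ~95 printable ASCII symbols, so a real phrase's effective entropy will be somewhat lower):

```python
import math

def search_space_bits(length, alphabet=95):
    """Bits of entropy for a password of `length` random printable-ASCII characters."""
    return length * math.log2(alphabet)

for pw in ["Xk3$p9Qz", "Han sh0t first.", "We're all standing n0w!"]:
    print(f"{pw!r}: {len(pw)} chars, ~{search_space_bits(len(pw)):.0f} bits")
```

Each additional character adds about 6.6 bits, multiplying the cracking time by nearly 100, which is why a 15-character phrase so thoroughly outclasses 8 random characters.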

Contact The Experts <theexperts@napc.com> for options to ensure better security with less work and better compliance across all of your services & websites! Single Sign-On and ZERO Sign-On for all of your corporate web services!

Five Tools To Protect Your Digital Assets Online

Posted by Grant Mongardi on Thu, Jul 11, 2013 @ 11:31 AM

Tags: digital asset management, digital asset protection, disaster recovery


"BEIJING: Cyber attacks that stole information from 141 targets in the US and other countries have been traced to a Chinese military unit in a drab office building in the outskirts of Shanghai, a US security firm alleged Tuesday." - Reuters

Google, Facebook, New York Times, U.S. Chamber of Commerce, Nortel Networks hacked. What chance do you stand?

If you can't trust your hardware, what do you trust? Information. Information is the key to both preventing and recovering from cyber attacks to your infrastructure. The right set of tools can be essential in protecting your data, digital assets, and your peace of mind.

1. Firewall - The first line of defense.

This is reasonably straightforward; however, you need to be sure you're getting what you expect. Newer hardware from Cisco, SonicWall, HP, and Dell should be fine; ZTE, not so much. Keep your hardware reasonably up to date to ensure the best security at the perimeter. Older, unpatched hardware is just an open door.

2. Identity Management - a means of authentication and Identification. You need to know who is in your systems.

You need to maintain a centralized store of usernames and passwords. An island of unmanaged identities is questionable when it resides inside your firewall, and even worse on your DMZ. Ensuring that you are recording both login failures and password lockouts is also an essential part of prevention. If you have stores of unmanaged accounts that provide access to anything on your network, you really need to make those go away. This is the Achilles' heel of any security-conscious company.

3. Authorization - You need to know who can do what.

You need to manage what level of access every account in your organization has. Each role in your company should have an assigned set of requirements for infrastructure access, and those requirements should determine exactly what privileges it is granted.
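A deny-by-default role map is one simple way to encode this idea. In the sketch below, the role and privilege names are purely illustrative; the point is that every access check goes through one explicit table.

```python
# Each role carries an explicit set of privileges; anything not listed is denied.
ROLE_PRIVILEGES = {
    "retoucher":      {"read_assets", "write_assets"},
    "studio_manager": {"read_assets", "write_assets", "approve_jobs"},
    "auditor":        {"read_assets", "read_logs"},
}

def authorized(role, privilege):
    """True only if the role explicitly grants the privilege (deny by default)."""
    return privilege in ROLE_PRIVILEGES.get(role, set())

print(authorized("retoucher", "approve_jobs"))  # False: not in the role's set
print(authorized("auditor", "read_logs"))       # True
```

Because unknown roles map to an empty set, a typo or an orphaned account fails closed rather than open.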

4. Auditing - you need to know what they are doing or what they did.

Log as much information as possible and review that information regularly. Forensics on a hacked system often show that evidence of the compromise was there weeks or even months before the system was actually hacked. In fact, we've found it's more the rule than the exception. Hackers are lazy, and typically will simply run automated scanning scripts on entire ranges of IP addresses looking for vulnerable systems. They often don't come back to the list of systems until they have some need later on. In many cases, you can prevent a system compromise simply by being diligent in monitoring your systems.
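Even a short script can surface those early-warning signs. This sketch scans syslog-style sshd lines (a made-up excerpt; in practice you'd read /var/log/auth.log or your platform's equivalent) and flags source addresses with repeated failures:

```python
import re
from collections import Counter

# Hypothetical log excerpt; real deployments would read the live auth log.
log = """\
Nov 11 02:11:07 web1 sshd[994]: Failed password for root from 203.0.113.9 port 51122 ssh2
Nov 11 02:11:09 web1 sshd[994]: Failed password for root from 203.0.113.9 port 51124 ssh2
Nov 11 02:11:12 web1 sshd[997]: Failed password for admin from 203.0.113.9 port 51130 ssh2
Nov 11 02:14:55 web1 sshd[1021]: Accepted password for grant from 198.51.100.4 port 40210 ssh2
"""

# Count failed-login attempts per source address.
failed = Counter(
    m.group(1) for m in re.finditer(r"Failed password for \S+ from (\S+)", log)
)
for ip, count in failed.items():
    if count >= 3:
        print(f"ALERT: {count} failed logins from {ip}")  # candidate for blocking
```

Run something like this on a schedule and the scanning scripts mentioned above show up in your alerts long before anyone comes back to exploit what they found.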

5. IDS/IPS - Intrusion Detection and/or Prevention system.

"IDS", if you are unaware, stands for Intrusion Detection System. These are typically network-resident systems that monitor network traffic and analyze it for potentially nefarious conditions. Some of these systems rely simply on being able to promiscuously monitor all network packets, while some use client-installed detection agents that read directly from the machines in question. Using a combination of a well-designed IDS and IPS (Intrusion Prevention System), you can prevent the overwhelming majority of network and server compromises.

The part not discussed here is the likelihood of individual vulnerable systems either becoming compromised or becoming vectors for compromise. Some of this can be mitigated by the items above; however, they're not a silver bullet. The primary goal of the above is to prevent unauthorized access to your critical systems. Preventing access to your desktops, laptops, and mobile devices is going to be a much more difficult job.

 

Deploy Macs Quickly and Simply With Centrify

Posted by Grant Mongardi on Wed, May 22, 2013 @ 01:56 PM

Tags: Centrify, Unix, Linux, IT, Macs, DirectControl, Windows

Mac

You're struggling with Mac deployments and wasting your valuable time fine-tuning the user experience on every Mac you release. You're sick of running around to desktops just to change minor settings like DNS, proxies, or background images. Who has the time?

Thanks to Centrify, you can deploy your Macs simply, quickly, and cheaply, with a modicum of effort and the ability to easily customize the end-user experience, all from a single OS X image file and all from the comfort of your desk.



You can easily manipulate the users' look and feel based on the role of the machine, so a kiosk would look different from a laptop or a desktop. This all happens after deployment, meaning the look-and-feel changes happen after the users log into their respective Macs.

Watch the video for the full rundown of how Centrify will make your life a lot easier when deploying Macs. There's much more to learn about Centrify on our site! 

Learn More About Centrify

 

Bam! How Centrify Makes Mac IT Work Easier

Posted by Grant Mongardi on Tue, May 21, 2013 @ 02:58 PM

Tags: Centrify, Unix, Linux, IT, Macs, DirectControl, Windows

CentrifyLogo

Working in IT presents a variety of challenges, especially when you're on a Mac. Whether it's running low on licenses because your Mac users never release them, or needing to manage recordable devices because of oversight by some regulatory committee, Centrify can save you a lot of time and headaches. Just like that - bam!


 

Centrify has a very low cost desktop version that allows you to control rights on Apple computers. Part of that is the ability to easily roll out dozens of new machines with minimal work. It's a common need, and we've a way of doing it no one else has. Centrify's DirectControl for Mac allows for joining Macs to AD and applying REAL Microsoft policy using Microsoft's Policy Management MMC. Stop trying to pass off configuration management as policy, and then spending hours explaining it to your auditor.

That's just the tip of the iceberg with Centrify, where you can:

-Create accurate, robust, and customizable reports on everything AD
-Deploy, manage, control, and customize your Mac desktops
-Manage your mobile devices and control your BYOD (bring-your-own-device) devices
-Realize all of your Single Sign-On (SSO) desires
-Manage user privilege on Windows, Linux, and Unix systems
-Monitor, record, and audit user activity on Windows, Linux, and Unix

So when you're working in IT, there's no need to get that sinking feeling that your Mac will give you more hurdles and obstacles than you have time for. Centrify can make management and control problems go away with a bam!

 

 

Learn More About Centrify

 

 

 

Out with the old...

Posted by Grant Mongardi on Wed, Jan 20, 2010 @ 08:55 AM

Often, Archiving of Digital Assets and other legacy digital files is an afterthought. Production just runs out of live working storage and needs to make room, so they drag "a bunch of files" off to an Archiving location that frees the working storage space for use with new projects. There is often little thought put into the process, and the files are just forgotten - until you need them again. If the original plan is lacking, it's possible that you'll find the system lacking.

So what do you need to think about?

 Lots of things, it turns out.

  • How long will you need to keep the data?
  • Will you need to keep all of it that long?
  • How much can I spend to implement it?
  • How much can I spend to maintain it?
  • How much time do I need to implement it?
  • How often will I need to do it?
  • How quickly do I need to restore it?
  • What regulations or standards are already in place?

Wow. And that's just the big items. There are also lots of little things to consider as well. I hope you have some time, because this could take awhile.

What's first?

Probably the first thing you should consider is whether there are any regulatory or standards bodies that will have jurisdiction over your Archiving process. Many organizations are subject to rules or guidelines set forth by standards and regulatory bodies that they are required to follow. If you plan your Archiving scheme beforehand, you should be able to meet nearly all of the rules and regulations that these organizations represent. Some things you may encounter in your foray into Archiving might be:

  • Sarbanes-Oxley
  • ISO 9001/9002/10012
  • ANSI
  • FDA
  • Military/Government Regulations

All of these have at the very least "good practices" guidelines that they expect you to follow. I won't get into them here, but if you're subject to any of these organizations' oversight, you should look into the requirements they impose upon you. Often, they will offer you nearly everything you need to make a well-informed decision.

How long will you need to keep the data?

That's just the simple question. If you're like most of our customers, you have lots of different types of digital assets. In addition to actual WIP, supplied artwork, and purchased artwork, you probably have fonts, process documents (.doc, .xls, etc.), templates, and so on. If so, you may want to manage these separately, as the need for maintaining (or even Archiving) them may vary depending upon their type and your requirements for storage. For instance, fonts and process documents might only need to be kept for a two-year period, whereas actual artwork might need to be stored indefinitely. Managing these separately will make the archiving process more complex, but will save you on storage requirements and tape costs in the long run.

In addition, you may decide that for some of your storage needs, charging the customer to maintain their assets is an option. In the WIP case this is difficult to do, but once you've gotten into Archival data it becomes a much more viable option. Archival data requires real estate for storage and man-hours to maintain and manage. Those are line items that are difficult to argue. It's not like WIP storage, where storing the data is necessary to produce a product.

Once the data is gone from WIP, it becomes a logistics issue, where tapes need to be stored either Nearline, Offline, or Offsite. Nearline storage is the most intrusive and expensive: it uses up a valuable tape slot, making it unavailable for Disaster Recovery or other Archives. Offline means that the tapes are stored in a vault onsite, with climate controls and a person to manage those tapes. Offsite means just that - buying storage offsite in a climate-controlled vault.
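If you do manage asset classes separately, the retention policy can be as simple as a table keyed by file type. A sketch of the idea (the extensions and retention periods below are examples only, not a recommendation):

```python
import time
from pathlib import Path

# Illustrative policy: fonts and process documents expire after two years,
# artwork is kept indefinitely.
RETENTION_DAYS = {
    ".doc": 730, ".xls": 730, ".ttf": 730, ".otf": 730,   # two-year classes
    ".tif": None, ".psd": None, ".ai": None,              # keep indefinitely
}

def expired(path, now=None):
    """True if this file's retention window (looked up by extension) has lapsed."""
    days = RETENTION_DAYS.get(Path(path).suffix.lower())
    if days is None:
        return False  # unknown or indefinite class: never purge automatically
    age_days = ((now or time.time()) - Path(path).stat().st_mtime) / 86400
    return age_days > days
```

A sweep over the Archive that calls `expired()` on each file then tells you exactly what can be reclaimed and what must stay.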

What media options do I have for Archive storage?

This depends upon the prior question and your desire or need for longevity. Generally speaking, for most organizations the choice is limited to magnetic tape, DVD/CD media, or nearline disk. There are other options, which you're welcome to investigate, but these three are the most popular. There are also promising technologies in the works, such as holographic storage and optical tape, but those are at least 5 years away.

Over the years, magnetic tape seems to be the most popular choice for long-term Archival storage of digital assets, although some do use DVD/CD storage, and more organizations are asking about the potential of nearlining assets to disk storage. Some of this decision does depend upon your budgets and the amount of data that you are Archiving, but only to a lesser extent. Nearly any organization can afford to purchase at the very least a stand-alone tape drive for archival purposes, and then just manually write files to tape. If you're still writing Archives to DVD/CD manually, you should probably consider the possibility of adding a stand-alone tape drive. It will make your process much more efficient, and allow you to store Archives in a more supportable and reliable fashion.

Magnetic tape is the most popular choice among our clients, as most require a Disaster Recovery backup and have already chosen to have an attached library for that purpose. This makes the choice for tape much simpler. In general, magnetic tape stored properly will last 15-20 years or more. Tape degradation is possible, and over extended periods of time data loss can occur; however, assuming that Archival tape is only written once and the tapes are then stored (once full) in the proper environment, you should be perfectly fine assuming the data will be there when you need it. That also assumes you remember where you put it in 20 years. NAPC generally recommends that any Archival data is written to two separate tapes. This ensures that whatever happens to one copy of an Archive, you will still have a second option for retrieval. Tape, being a magnetic medium, is vulnerable to magnetic fields, and to a lesser extent humidity and temperature variations, so proper storage is essential to longevity. It's probably not a good idea to store it on top of a large electric motor, or inside the Large Hadron Collider room.
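The two-tape recommendation is easy to back up with checksums: record a digest for each file when the Archive is written, and later confirm that at least one copy still matches. A sketch using directories to stand in for the two tape copies:

```python
import hashlib
from pathlib import Path

def sha256sum(path):
    """Streaming SHA-256 so multi-gigabyte assets never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(manifest, copy_a, copy_b):
    """Return files whose recorded digest matches on neither archive copy."""
    bad = []
    for name, digest in manifest.items():
        copies = [Path(copy_a) / name, Path(copy_b) / name]
        if not any(p.is_file() and sha256sum(p) == digest for p in copies):
            bad.append(name)
    return bad
```

As long as `verify_copies` returns an empty list, an intact copy of every Archived file survives; anything it does report should be re-duplicated from the good copy before the second one degrades too.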

Anecdotally, DVD/CD media appears to be more prone to data degradation than tape (or disk, for that matter). DVD/CD media is very sensitive to bright light and temperature variations, as well as air pollution and humidity. Also, the quality of manufacture seems to dramatically affect this media's longevity, as does the age of the media _before_ writing to it. You can purchase all different types of writable DVD/CD media, some even with a gold media layer. The manufacturers may tell you it lasts as long as tape, but our experiences, and the experiences of our customers, seem to contradict that. Mind you, duplicate DVD/CD Archives, which we also recommend with tape-borne Archives, can alleviate most of these concerns, as having a duplicate copy of your Archives ensures that a file would need to be damaged on both media before you would ever lose it. It's very difficult to gauge an effective life expectancy, however, as we've seen DVD/CD media that is unusable after only a year, and some that has lasted 10 years.

Nearlining Archives to disk has been gaining ground lately. The idea is that you present inexpensive, slower disk to your Xinet server, move Archives to that disk, and then just run a tape backup or mirror/snapshots of that data for Disaster Recovery purposes. Given the relatively small expense of adding a large quantity of disk to an existing Enterprise storage device and the simplification it provides, this has become more attractive to larger organizations, as it allows them to run Enterprise-level backups along with their existing backup strategies. For some organizations, this makes documenting and managing these Disaster Recovery schemes simpler for the purposes of explaining them to standards and regulatory inspectors. That alone can sway the powers-that-be to allocate monies for the expense. The scenario is typically more expensive than tape, however, and unless there are other more enticing rewards for doing so, it can be a less viable option. The reliability is generally the same as that for live data: only if the device fails in such a way that data on the disk is lost will the data be lost. Most enterprise-level storage maintains the data it stores via various error-checking schemes, so the chance of data loss, even over lifetimes greatly exceeding that of tape, is slim. However, most organizations would replace the underlying device(s) long before that. I hope.

So which should I choose?

That's not an answer I can offer. It definitely requires some study on your part, and depends largely on what sort of work you do, and what your actual needs are. You'll probably spend a lot of time in meetings discussing the actual needs of your customers and end-users, and what exactly their requirements are, as well as what your needs and resources are. If you need any assistance or guidance in how to best approach this, you should feel free to call your NAPC Account Representative, and they will be happy to help point you in the right direction.

Where can I get more information?

The Council on Library and Information Resources has some good resources on media:

CD Media

Magnetic Tape

And as always, feel free to contact NAPC for any help in defining your Archiving strategy.

Understanding Modern, Journalled Filesystems

Posted by Grant Mongardi on Fri, Nov 06, 2009 @ 11:54 AM

Tags: File systems, filesystems, DR, journalling, superblock, disaster recovery, metadata

    By understanding how filesystems work, you can hopefully gain a better understanding of how you are and are not at risk. Most modern filesystems used for enterprise storage are what is called "journalled" (or "transactional"). In this article, I'll try to explain the behaviors of these filesystems so that you might better be able to assess your risk, and therefore be better able to recover in the event that the unthinkable happens - your data goes away. Please note that I am in no way implying that the behavior described in this article is universal to all filesystems, only that it is my understanding of most of the modern-day filesystems I work with.
    No filesystem is risk-free. There will always be the potential for data loss; however, most enterprise-level filesystems do a good job of preserving the integrity of your files. The various pieces all work together to ensure that your data is there when you need it, regardless of the circumstances. However, the geniuses (and I mean that) who design these systems can't make your files immune to the failures we should all know could happen. The idea is to understand that risk, and make the best choices we can to protect ourselves in the event the worst does happen. You may go your entire career without ever encountering this, but better prepared than wanting.
    All hierarchical (folder tree) filesystems contain these two components:
        Metadata
        Data Store
This includes non-journalled filesystems as well as journalled filesystems.
   The Metadata generally includes the superblock and the inode, or file table, which is just an index of the file names, where each file starts being stored on the disk, and where in the hierarchy it resides. It may also contain other information such as size, integrity references (checksums), or perhaps storage type (binary or text). This information is used for storage and retrieval of these files, as well as for describing particulars about how this filesystem is implemented.
   The Data Store is simply the space where the actual file data is stored on disk. Most filesystems break it into blocks of storage of a pre-determined, fixed size. Fixed-size blocks tend to make retrieval and writing much faster, as there's much less math for the filesystem drivers to perform in order to find where a file is stored and where it ends. There are some exceptions, and these exceptions are typically very optimized for the way they handle variable-sized blocks, so any performance hit you take for "doing the math" is more than made up for by a filesystem optimized for the type of files you're storing on it.
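To see why fixed-size blocks keep the math cheap, here's a minimal sketch (hypothetical helper names, toy values): with a known block size, locating any byte offset inside a file is simple integer arithmetic.

```python
# Toy illustration: fixed-size blocks turn "where is this byte?" into
# a single integer division, rather than a walk through variable-sized records.
BLOCK_SIZE = 4096  # 4 KiB, a common default block size

def locate(offset):
    """Map a byte offset within a file to (block index, offset inside that block)."""
    return offset // BLOCK_SIZE, offset % BLOCK_SIZE

def blocks_needed(file_size):
    """Number of fixed-size blocks needed to hold a file (the last may be partial)."""
    return (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE

print(locate(10000))         # (2, 1808) -- third block, 1808 bytes in
print(blocks_needed(10000))  # 3
```

With variable-sized blocks, the driver would instead have to consult per-block length records to answer the same question, which is the "math" the paragraph above refers to.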
   To retrieve a file, the filesystem looks up the file in the metadata table, determines where on disk the file starts, and goes and reads that piece. In the process of reading a single file, the filesystem drivers may find that the file is not stored in one contiguous (one-after-the-other) group of blocks. When this happens, it's called 'fragmentation'. No performance filesystem that I know of can totally avoid some level of fragmentation. As filesystems begin to fill up, with regular file creations and deletions, the filesystem needs to get creative about how it stores files. Although the ideal is to store each file contiguously, that isn't always possible without rearranging a lot of other allocated blocks to free up a large enough run. Trying to do this would make the filesystem incredibly slow on every write, as it would need to perform all sorts of reorganizations any time it wrote a file too big for the available contiguous blocks. Modern disks have improved dramatically at these non-contiguous block reads and writes (known as 'random seeks'), so when a filesystem has a reasonable amount of free space (14% or more), performance should remain acceptable.
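The retrieval path above can be sketched in a few lines. This is a toy model (hypothetical structures, not any real filesystem's API): the metadata table gives the ordered block list, and the blocks themselves may be scattered anywhere on disk.

```python
# Toy model of file retrieval from a fragmented filesystem: the metadata
# records the block chain in *file* order, even though the blocks live at
# scattered locations in the data store.
store = {9434: b"hel", 10471: b"lo ", 33455: b"wor", 8167: b"ld!"}
metadata = {"hello.txt": {"size": 12, "blocks": [9434, 10471, 33455, 8167]}}

def read_file(name):
    inode = metadata[name]
    # Follow the block chain in order, wherever each block happens to live.
    data = b"".join(store[b] for b in inode["blocks"])
    return data[:inode["size"]]  # trim any slack in the final block

print(read_file("hello.txt"))  # b'hello world!'
```

Note that the reader never notices the fragmentation; only the disk heads do, as they seek between distant blocks.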
   Now to the hard part - writing files. To write a file, the filesystem first determines the size of the file being written, then tries to find a group of blocks right next to each other to store the entire thing. Storing a file in contiguous blocks means that the mechanical heads reading the physical disk don't have to move much to get to the next block. For obvious reasons, this is much faster and more efficient. If the filesystem finds such a group, it writes the Metadata, recording where the file is stored, then writes the actual file data to the Data Store. If it cannot find a single group of blocks to hold the whole file, it finds the optimum blocks across which to distribute it. Different filesystems define 'optimum' differently, so I won't get into that here; suffice it to say, some filesystems are better at it than others. As a filesystem fills up, the drivers have a much harder time storing files, having to break each file up more, and as such having to perform more calculations to get it all right.
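The allocation step can be sketched as a simple first-fit search. This is a hedged simplification (real allocators are far more sophisticated about what 'optimum' means): try for a contiguous run of free blocks first, and fall back to scattering the file when no run is big enough.

```python
# Toy block allocator: prefer a contiguous run of free blocks, fall back
# to fragmenting the file across whatever free blocks remain.
def allocate(free, nblocks):
    """free: sorted list of free block numbers. Returns blocks for the new file."""
    run = []
    for b in free:
        # Extend the current run if this block is adjacent, else start over.
        run = run + [b] if run and b == run[-1] + 1 else [b]
        if len(run) == nblocks:
            return run  # found a contiguous home for the whole file
    # No contiguous run is large enough: scatter (fragment) the file.
    if len(free) >= nblocks:
        return free[:nblocks]
    raise OSError("out of space")

print(allocate([2, 3, 4, 5, 9], 3))   # contiguous: [2, 3, 4]
print(allocate([2, 3, 7, 8, 12], 3))  # fragmented: [2, 3, 7]
```

As the free list shrinks and grows ragged, the contiguous search fails more often, which is exactly why performance degrades as a filesystem fills up.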

What happens when Murphy's Law strikes?
   Filesystems breaking, or filesystem corruption as it's known, is most often caused by underlying hardware issues such as the physical disk dying or 'going away', or perhaps a power outage. (Some filesystems are more fragile than others and can break through regular use, but that's not typical of true enterprise filesystems.) If the disk isn't mounted, or no writes are happening at the time of the failure, the filesystem is very likely to be unharmed, and a simple repair should get it back to normal. However, most enterprise filesystems are in a constant state of flux, with files getting written, deleted, moved, or modified nearly all of the time, so such a failure is often cause for concern.
   When this failure occurs, any operation in flight is truncated at the point of failure. If the filesystem was in the middle of writing a file, not only did the entire file fail to reach disk, but parts of the file and its metadata record are likely incomplete, and there's no warning of that other than the broken parts themselves. On a non-journalled filesystem, the damage may not be noticed until someone finds the partially written file, and when they try to open it, any number of scenarios can play out. The worst is a storage chain broken in a really bad way. For instance, the file may start at block 9434, jump from there to block 10471, then to block 33455, and back to block 8167. If the system died before it had a chance to record all of that, the filesystem might think the file goes from 10471 to 2211, simply because that's the value that was stored for that block previously (or perhaps it's just whatever random data was there to start with). If block 2211 lies somewhere within the superblock of your filesystem, and someone tries to resave that file, you've just completely broken your entire filesystem. Oooh, that hurts.
   This is where the journalling comes in. On a journalled filesystem, every operation is written down ahead of time in the journal, then the operation is performed, then it's deleted from the journal. After a failure like the one described in the last paragraph, when the filesystem comes back up, it will see that there are operations still in the journal and can "roll back" those operations as if they never happened, returning the filesystem to its last known good state. This ensures that each modification to the filesystem is 'atomic' - it only takes effect when everything is absolutely complete. Because the journal holds every not-yet-complete operation, the filesystem can see on recovery exactly what was in progress and return to the state before the partially complete operation. Although this isn't totally fool-proof, it certainly is miles ahead of the alternative. It reduces the possibility of a complete data loss to the case where the journal, the filesystem, and the Metadata are all being written and all become corrupted at the same time - and the likelihood of all three breaking together is reasonably low.
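The journal's record-then-perform-then-clear cycle can be sketched in miniature. This is a hypothetical toy (real journals log at the block level, not with a Python list), but it shows why an interrupted operation never half-lands:

```python
# Toy write-ahead journal: 1) record intent, 2) perform, 3) clear the entry.
# Any entry still present at mount time marks an incomplete operation.
journal = []                    # pending, not-yet-committed operations
metadata = {"a.txt": [1, 2]}    # last known good state

def journaled_write(name, blocks, crash_before_commit=False):
    journal.append(("write", name, blocks))  # 1. write intent to the journal
    if crash_before_commit:
        return                               # simulate power loss mid-operation
    metadata[name] = blocks                  # 2. perform the operation
    journal.pop()                            # 3. commit: clear the journal entry

def recover():
    # On remount, roll back anything still in the journal as if it never
    # happened, leaving the last known good metadata intact.
    while journal:
        journal.pop()

journaled_write("b.txt", [3, 4])                         # clean write
journaled_write("c.txt", [5], crash_before_commit=True)  # simulated crash
recover()
print(sorted(metadata))  # ['a.txt', 'b.txt'] -- no half-written c.txt
```

Real journalled filesystems can also roll *forward* (replay a fully logged operation instead of discarding it), but the atomicity guarantee is the same: you land on one side of the operation or the other, never in between.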

I live in a "reasonably unlikely" world. What do I do?
   Regardless of whether you're a lucky person or not, you should always have a good disaster recovery plan. Not only should you have one, you should schedule regular tests of that plan, or at the very least audit it. NAPC has years and years of experience in this area, and can discuss with you the risks associated with your particular configuration, as well as the possible disaster recovery scenarios in the event of a catastrophic failure. To set up such a discussion, contact your Sales representative, or just send us an email!