Understanding Modern, Journalled Filesystems

Posted by Grant Mongardi on Fri, Nov 06, 2009 @ 11:54 AM

Tags: File systems, filesystems, DR, journalling, superblock, disaster recovery, metadata

    By understanding how filesystems work, you can gain a better sense of where you are and are not at risk. Most modern filesystems used for enterprise storage are "journalled" (also called "transactional"). In this article, I'll explain how these filesystems behave so that you can better assess your risk, and therefore be better prepared to recover in the event that the unthinkable happens - your data goes away. Please note that I'm not claiming the behavior described here is universal to all filesystems; it reflects my understanding of the modern filesystems I work with.
    No filesystem is risk-free. There will always be some potential for data loss, but most enterprise-level filesystems do a good job of preserving the integrity of your files. The various pieces all work together to ensure that your data is there when you need it, regardless of the circumstances. However, the geniuses (and I mean that) who design these systems can't make your files immune to the failures we all know can happen. The idea is to understand that risk, and make the best choices we can to protect ourselves in the event that it does happen. You may go your entire career without ever encountering this, but better prepared than wanting.
    All hierarchical (folder tree) filesystems contain these two components:
        Metadata
        Data Store
This includes non-journalled filesystems as well as journalled filesystems.
   The Metadata generally includes the superblock and the inode (or file) table, which is an index of the file names, where each file starts on the disk, and where in the hierarchy it resides. It may also contain other information such as size, integrity references (checksums), or perhaps storage type (binary or text), as well as particulars about how this specific filesystem is implemented. This information is used for both storage and retrieval of files.
   The Data Store is simply the place where the actual file data lives on disk. Most filesystems break it into blocks of a pre-determined, fixed size. Fixed-size blocks tend to make retrieval and writing much faster, as there's much less math for the filesystem drivers to perform to find where a file starts and where it ends. There are some exceptions, and those exceptions are typically heavily optimized for the way they handle variable-sized blocks, so any performance you lose "doing the math" you more than make up for by having a filesystem optimized for the type of files you're storing on it.
   To retrieve a file, the filesystem looks up the file in the metadata table, determines where on the disk the file starts, and reads that piece. In the process of reading a single file, the filesystem drivers may find that the file is not stored in one contiguous (one-after-the-other) group of blocks. This is called 'fragmentation'. No performance filesystem that I know of can totally avoid some level of fragmentation. As a filesystem fills up, with regular file creations and deletions, it needs to become creative about how it stores files. Although the ideal is to store each file contiguously, that's not always possible without rearranging a lot of other allocated blocks to free up a large enough run. Doing so would make the filesystem incredibly slow on writes, as it would need to perform all sorts of reorganizations any time it wrote a file too big for the available contiguous blocks. Modern disks have improved dramatically at these non-contiguous block reads/writes (known as 'random seeks'), so as long as a filesystem has a reasonable amount of free space (14% or more), performance should remain acceptable.
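The contiguity idea above can be sketched as a toy model in Python (the block numbers are made up, and real filesystems track block lists in inodes or extent trees, but the check is the same idea):

```python
def is_fragmented(blocks):
    """True if the file's blocks are not one contiguous run on disk."""
    return any(b != prev + 1 for prev, b in zip(blocks, blocks[1:]))

# A file occupying blocks 100-103 reads with no extra head movement...
print(is_fragmented([100, 101, 102, 103]))  # False
# ...while one scattered across the disk forces random seeks.
print(is_fragmented([100, 101, 205]))       # True
```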
   Now to the hard part - writing files. To write a file, the filesystem first determines the size of the file being written, then tries to find a group of blocks right next to each other that can hold the entire thing. Storing a file in contiguous blocks means the mechanical heads reading the physical underlying disk don't have to move much to get to the next block, which for obvious reasons is much faster and more efficient. If the filesystem finds such a group, it writes the Metadata, recording where the file will be stored, and then writes the actual file data to the Data Store. If it cannot find a single group of blocks to hold the whole file, it finds the optimum set of blocks across which to distribute it. Different filesystems use different definitions of 'optimum', so I won't get into that here; suffice it to say, some filesystems are better at it than others. As a filesystem fills up, the drivers have a much harder time storing files, having to break each file up more, and therefore having to perform more calculations to get it all right.
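That allocation decision can be sketched as a toy first-fit allocator (my own simplification, not any particular filesystem's policy; the block numbers are hypothetical):

```python
def allocate(free_blocks, nblocks):
    """Toy allocator: prefer a contiguous run of nblocks free blocks;
    if none exists, scatter the file (fragmentation) across whatever
    free blocks come first. free_blocks is a sorted list."""
    run = []
    for b in free_blocks:
        # Extend the current run, or start a new one when we hit a gap.
        run = run + [b] if run and b == run[-1] + 1 else [b]
        if len(run) == nblocks:
            return run, False                 # contiguous fit found
    return free_blocks[:nblocks], True        # fragmented fallback

# Free blocks 5-6 and 9-12: a 4-block file fits contiguously at 9-12,
# but a 5-block file must be split across both gaps.
print(allocate([5, 6, 9, 10, 11, 12], 4))  # ([9, 10, 11, 12], False)
print(allocate([5, 6, 9, 10, 11, 12], 5))  # ([5, 6, 9, 10, 11], True)
```

The second return value plays the role of the "had to fragment" outcome described above; a fuller model would also update the metadata table with the chosen block list.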

What happens when Murphy's Law strikes?
   Filesystems breaking, or filesystem corruption as it's known, is most often caused by underlying hardware issues such as the physical disk dying or 'going away', or perhaps a power outage (some filesystems are more fragile than others and can break through regular use, but that's not typical of true enterprise filesystems). If the disk isn't mounted, or no writes are happening at the time of the failure, the filesystem is very likely to be unharmed, and a simple repair should get it all back to normal. However, most enterprise filesystems are in a constant state of flux, with files being written, deleted, moved, or modified nearly all of the time, so such a failure is often cause for concern.
   When such a failure occurs, any operation in flight is truncated at the point of the failure. If the filesystem was in the middle of writing a file, not only did the entire file not make it to disk, but parts of the file and its file record are likely incomplete, and there's no warning of that other than the broken parts themselves. In a non-journalled filesystem, this means the damage may not be noticed until someone finds a file that was only partially written. When they try to open it, any number of scenarios are possible. The worst is that the storage chain is broken in a really bad way. For instance, the file may start at block 9434, jump from there to block 10471, then to block 33455, and then back to block 8167. If the system died before it had a chance to record all of that, the filesystem might think the file goes from 10471 to 2211, simply because that's the value previously stored for that block (or perhaps it's just random data that was there to start with). If 2211 is somewhere within the superblock of your filesystem, and someone tries to resave that file, you've just completely broken your entire filesystem. Oooh, that hurts.
   This is where the journalling comes in. With a journalled filesystem, every operation is written down ahead of time in the journal, then the operation is performed, then it is deleted from the journal. In the event of a failure like the one described above, when the filesystem comes back up, it will see that there are operations in the journal and can "roll back" those operations as if they never happened, returning the filesystem to its last known good state. This ensures that each modification to the filesystem is 'atomic' - it only counts when everything is absolutely complete. Even if some of those operations had finished, the journal holds all of the not-yet-complete operations being carried out, so on recovery the filesystem can see what was being done and return to the state before the partially complete operation. Although this isn't totally fool-proof, it is miles ahead of the alternative. It reduces the possibility of complete data loss to events in which the journal, the Metadata, and the file data are all being written and all become corrupted at the same time. The likelihood of all three breaking at once is reasonably small.
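The journal's roll-back behavior can be modeled in a few lines of Python (a deliberately simplified undo-style journal over a dictionary, not any real on-disk format):

```python
class JournalledStore:
    """Toy write-ahead journal: log the intended change, apply it,
    then clear the journal entry. After a crash, recover() rolls back
    anything still in the journal, restoring the last-known-good state."""
    def __init__(self):
        self.data = {}       # the "data store"
        self.journal = []    # pending operations: (key, old_value)

    def write(self, key, value, crash_midway=False):
        # 1. Journal the operation first (old value lets us roll back).
        self.journal.append((key, self.data.get(key)))
        # 2. Apply it to the data store.
        self.data[key] = value
        if crash_midway:
            return           # simulate power loss before step 3
        # 3. Operation complete: remove it from the journal.
        self.journal.pop()

    def recover(self):
        # Roll back every operation still sitting in the journal.
        while self.journal:
            key, old = self.journal.pop()
            if old is None:
                self.data.pop(key, None)
            else:
                self.data[key] = old

fs = JournalledStore()
fs.write("a.txt", "hello")                       # completes normally
fs.write("a.txt", "garbage", crash_midway=True)  # "power fails"
fs.recover()
print(fs.data["a.txt"])                          # back to "hello"
```

Real journalled filesystems typically journal metadata (and sometimes data) in per-transaction records, but the invariant is the same: an operation either completes fully or leaves no trace.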

I live in a "reasonably unlikely" world. What do I do?
   Regardless of whether you're a lucky person or not, you should always have a good disaster recovery plan. Not only should you have one, you should schedule regular tests of that plan, or at the very least audit it. NAPC has years and years of experience in this area, and can discuss with you the risks associated with your particular configuration, as well as all of the possible scenarios for disaster recovery in the event of a catastrophic failure. To set up such a discussion, contact your Sales representative, or just send us an email!

Staying Current on Support

Posted by Robert Sullivan on Tue, Oct 13, 2009 @ 11:28 AM

Tags: support, Xinet, XMP, Adobe, metadata


I ride dirt bikes and recently incurred a 700 dollar repair bill on my son's bike.
What? What was the cause of this? It's a two-stroke motocross bike and the engine had
seized up, which is not all that unusual, but we had done a new top-end not all that
long ago. So why had it blown again so early?

The 'real mechanic' I brought it to explained that pump gas has changed drastically
from what the engine was designed to run on. But the bike is only four years old..!
Pump gas these days is going greener, with at least 10% ethanol or more, and is so
oxygenated that it burns hotter, expanding the rings tighter around the piston...
you get the idea. Ring-ding-a-ding goes to bwop-bwooooop... silence.

As backyard mechanics working on dirt bike engines, we hadn't done anything wrong with
our top-end. The gas we run had changed, and we never realized the consequences.

I had an issue last week that drove home the importance of staying current
with system-wide support. My customer was having difficulty with XMP metadata
that was not showing up in Venture. These were XMP fields that had been working
correctly not that long ago. Venture syncs were not showing the field values
either. The first instinctive question is "what changed?" And the answer is
"we're not doing anything differently." Like me with my dirt bike.

Xinet engineering had me activate fpod vlog per a specific tech note, which
creates a special output file. Copy a questionable file again and watch for it to
show up in the log. Verify that the XMP data is not showing in the browser, then
run the syncxmp command in debug mode to capture what the sync is actually doing,
or having issues with. Verify that the data still doesn't show up, and send the logs
and sample file in to them for analysis.
They came back with a new syncxmp binary file to slip in, and this was the resolution
to the problem.

I inquired with Xinet engineering about the 'why' and 'how' questions having to do with
the metadata not showing up in Venture. Xinet replied that from their perspective,
the issue had to do with an incompatible file: specifically, the jpeg image that I
had sent, which caused syncxmp to fail with these errors:

syncxmp(87535) malloc: *** error for object 0x512060: Non-aligned pointer being freed (2)
syncxmp(87535) malloc: *** error for object 0x512390: double free

Now this error is gobbledygook to me. I would have thought a pointer being freed was
a good thing, but I'm not a code-writing engineer, for several reasons, which is why I
have a very well-defined escalation path.

The way they "fixed" this was to test my sample file against a newer build of syncxmp,
from the new Suite 16 code, which incorporates some newer XMP libraries provided by Adobe.
In short, the newer Adobe libraries resolved the issue.

So as far as my customer was concerned, they were "not doing anything differently."
But apparently Adobe was. The problem came from the fact that Adobe doesn't stand
still. Ever! They continue to evolve and improve their XMP libraries, and those
changes were not recognized by the Xinet version my customer was running.
This is the intrinsic value of having support. We were able to update to a newer
binary to stay current with the ever-changing world.

So even though you may not be doing anything different... Change Happens!
My dirt bike solution is to run race gas. The world keeps changing around us without our
consent or input and will not wait for us to adapt or catch up. The leading edge is really
not all that far ahead. But by falling behind, the distance becomes huge and costly.
So stay current with good support.
And if you ride dirt bikes, check your gas.

-Sully

Xinet Suite-16 Has A Sweet Interface

Posted by Robert Sullivan on Fri, Jul 17, 2009 @ 11:33 AM

Tags: GUI, Xinet, XMP Panels, NAPC

We've been working with Xinet's newest upcoming version release, Suite 16,
and I'm telling you, it is sweet! Aside from the video excitement that Brian talked
about, there are a lot of cool developments in Suite 16.

All of the administration tools from FullPress and WebNative Venture are now in one
web-based GUI. The old Java GUI for FullPress will still be available in Suite 16,
but I don't think it will be developed beyond that. Once you get used to the new
web-based admin, you'll toss that old interface like yesterday's newspaper anyway.

There are three control bars in the new interface, and the top row has six control
categories. You can see in the two screen captures below that as I change the top
row category, the next two rows change and give you access to all of the pertinent
settings for that selection. It's very well thought out, where simple things like the
Volumes/User category also give you the Venture Permission settings to assign.
 
 

As an administrator, I think the "Logging" category is the big boon, the no-brainer.
All of the logs that were scattered about before are now under one button. Xinet has
also added some more views into information that users are always asking about.
'Preview Generation' will now show you what is in the pipeline as far as how many files
are being processed. So if a user dumps 25 movie files and 71 images into the system,
you can see the numbers of what is processing, waiting, or holding - helping to take the
guesswork out of "what is my system working on?"

There's also a nice feature for importing any existing custom XMP panels you may have.
What was a manual process of adding one field at a time is now a batch operation in the
interface. If you have custom XMP panels within your Adobe programs, you can now import
them into the Venture database very easily. Drop your panel in, select which data fields
you'd like in Venture, create its own Data Field Set on the fly, and even determine which
fields you'd like to be XMP-writable. Submit, and it's done. All of those custom data
fields are now available to put into your data templates.

There's a lot more in here too. We'll be talking a lot about new features and I'm sure
NAPC will be holding a Webinar or two as the release date draws near. I think when you
see it, you'll agree with me, Xinet Suite 16, is pretty sweet!

-Sully
  

New Video features in Xinet v16

Posted by Brian Dolan on Fri, Jul 03, 2009 @ 08:29 AM

Tags: video, reel, Xinet, how to, Xinet WebNative Portal, DAM Systems, Portal, NAPC, NAPC blog

As the Holiday weekend starts up for us all, let's close the week out and chat a bit about all the cool things that are coming to the masses soon.  Xinet is going to be releasing version 16 of its suite of tools, including a new, faster version of Portal, a unified web interface for all administration, easier tools and setup for PDF Image replacement, greatly enhanced video capabilities, basic web-based markup and annotation tools, and a whole bunch of other "under the hood" improvements.  Currently, NAPC is testing beta 2 of version 16 and we're all pretty impressed with it so far.  One of the biggest things I'm excited about is the enhanced video features.  Let me esplain (as Ricky would say).


Xinet, in the soon-to-be-released version of video in Suite 16, has greatly enhanced how users in Portal interact with video assets.  In the current release of Video 2.0 in Xinet, it is possible to stream many video formats, create keyframes at a preset interval, and really, that's about it.  With the new version, you'll be able to do much, much more.  First and foremost, the ability to create what I would call mini-reels is now available as a basket plugin in Portal.  This is how it works:


1)    User logs in to a Portal site and identifies the files they want to work with.  Those files could be video files of various formats, InDesign files, static picture files, just about anything you can have in Xinet.
2)    The user would then add those files to a shopping basket.
3)    Once in the basket, the user would click the basket plugin named “Video Generation”
4)    This brings up a new Web 2.0 type of interface to arrange the assets into whatever order makes sense to the end user.  Asset arrangement is made simple by using drag and drop in a web browser - me likey!
5)    Once in the correct order, the user can set the ‘in and out’ times of the files based on keyframes generated by Xinet or by hours:minutes:seconds.
6)    The user can also set basic fade outs from clip to clip as well.  Gives it a nice touch!
7)    Once the files are arranged in the correct order and the in/out times are set, a new video file can be generated from those assets in QuickTime, Windows Media, or Flash format.
8)    The server then generates the appropriate file on the Xinet file system and once done, it gives the end user the ability to download the file to their desktop.

Here's a peek of what it'll look like:

 

This is huge everyone.  Think of it this way, if you have 30 second spots for a client for all of 2008, and they want to create a quick reel of all the ones that won awards (that you made of course!), they can quickly log in to Xinet via Portal, collect the assets, set the times and format and let Xinet make the file for them.  To be clear, this is not intended for broadcast but more for the web or computer screen aka small screen.  I think this is a huge leap forward for Xinet and since I used to work in the broadcast world, it’s pretty exciting for me as you might be able to tell!

On top of that, scene detection for keyframing is also part of the new release.  The current version can be set to sample a keyframe at a set interval, say every 5 seconds or so, regardless of whether the scene changed.  That can potentially add a bunch of useless keyframes into your database.  With the new scene detection functionality, you can set the admin preferences so it is "smart" and only creates keyframes when a scene actually changes, with tolerance controls.  So, instead of keyframing a movie that is 1 minute long and getting 12 keyframes (when sampled every 5 seconds), you may only get 7 or 8 frames stored in the database. This can be very helpful!

Overall, we have a lot to look forward to with the upcoming release of version 16 of Xinet’s Suite of tools.

Enjoy the weekend all and as always, if you have any questions on any of this information, please give us a ring and we’ll be happy to help!  Want to see this new functionality for yourself???  Give your Account Manager a call and we’ll be happy to show you all the new stuff.

Happy 4th of July!

Brian Dolan

10Gb networking and DAM

Posted by Rob Pelmas on Wed, Jun 03, 2009 @ 09:56 AM

Tags: knowledge, how to, DAM Systems, Portal, workflow

We're a bunch of performance geeks here. We've been tweaking blocksizes, stripe, and interleave settings on disk since SGI first gave you access to 'em. Tuning and re-tuning SWAP size, location, type is in our blood. A few percentage points here, double digit gains there, all without more capex. Gotta love it.

Now, anytime a paradigm shift in technology comes out, there's a steep cost differential to it, right? 10Gb networking had only a tiny little blip of time when it was out of reach of the masses, which is a refreshing change. You can kit out most servers with a card, plus an acceptable managed switch with a 10Gb port or two, for a very reasonable cost.

Why go to 10? Our desktops have had Gb cards for what seems like forever, and very fast CPUs. With just a couple of 'power' users you could swamp the networking capabilities of a server. Of course, a handful of years ago disks could only cough up 150MB/sec or so of sustained data, so the network tended not to be the gating factor in server performance. Modern disk starts at well over 300MB/sec, and if you stripe or otherwise use some common-sense design principles, you can achieve multiples of that.

Xinet and NAPC both use the 1-in-6 rule for users and performance: with 6 retouchers (or 'power' users), you can assume 1 of them will be accessing the server at any one time. 12=2, 18=3. It's a rough rule of thumb, but one that seems to stand up over time. 12 heavy hitters can thus drain 120MB/sec out of a server, which is the better part of two 1Gb cards bonded together. Add in the other users doing layout, OPI printing (yep, some folks still use an OPI workflow), and Portal access, and you've got a saturated pipe. 10Gb gives you a good 800MB/sec of access speed, which will sate all but the most demanding organizations' needs for data.
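The arithmetic behind the rule of thumb is easy to sketch. In the snippet below, the ~60MB/sec-per-active-user figure is my own hedged assumption (roughly what one busy retoucher can pull over a Gb desktop link); only the 1-in-6 ratio comes from the rule itself:

```python
def active_users(power_users, ratio=6):
    """The 1-in-6 rule: one of every six power users hits the server at once."""
    return power_users // ratio

def peak_demand_MB(power_users, per_user_MB=60):
    """Rough peak server bandwidth demand in MB/sec (assumed per-user rate)."""
    return active_users(power_users) * per_user_MB

for n in (6, 12, 18):
    print(n, "power users ->", active_users(n), "active,",
          peak_demand_MB(n), "MB/sec")
# 12 power users -> 2 active, ~120 MB/sec: near the limit of bonded Gb
# links, but comfortably inside 10Gb's ~800 MB/sec of usable throughput.
```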

Next of course, we can talk about teaming 10Gb interfaces! (insert evil chortle of delight here).

 


Xinet Automation in the Command Line

Posted by Robert Sullivan on Wed, Apr 15, 2009 @ 09:52 AM

Tags: database, Xinet, how to, workflow

Some people actually prefer to work from the command line in a terminal window.
I know! Can Ya' believe it!
There are some things that just aren't in the GUI, and there are some things you
can just do faster in a terminal window. That is, assuming you can type faster
than me.

I had a client call in looking for a way to create a series of "Move Actions"
but wanted to do it as a batch somehow. I couldn't get him a 'batch create' for a
WebNative Venture action, but... command line creation fit the bill.

The idea was for remote users to have an Uploader Application and one of the
mandatory data fields was the client name. The Trigger set would do a 'Compare Action'
and then call a Move to-client-name, within the folder structure. So it's going to be
an automated filing system with Triggers for uploaded assets.

In the folder path: /usr/etc/webnative/actions/
are all of the default Venture action categories, (copy, email, move...) and any
custom action categories we can create ourselves. Within each specific category folder
are the scripts for that action and a settings folder that has a config file for each
action we create.

Here is a sample move action called: "Acme Widgets Incoming"

If I look at that setting from the command line it looks like this:
# cat "Acme Widgets Incoming"
[base]
desc=File asset in client folder Acme
[arg0]
value=/Volumes/Raid/Studio/Acme_Widgets/Incoming
[arg1]
value=O
#
It's pretty simple once you compare the GUI to the code settings.
desc is the description of the action
arg0, value, is the destination path we want to move the asset into.
arg1, value=O, is the 'Overwrite' selection.
This value could be 'A' for 'Append Unique Number' or 'F' for Fail
You just need to understand the argument options and the code translation. Obviously
the different action categories will have other argument options; you just need to review
each one to know the 'value' to assign.

So back to my client's need to create multiple actions quickly. Consistency in the folder
structure and nomenclature is part of the key. All we do is create one 'master' move action
in the WebNative browser window. "Acme Widgets Incoming"
From the command line copy the master to the next client name and then edit the value path.
Done.

In the terminal window navigate to the action settings folder.
   cd /usr/etc/webnative/actions/move/settings
Copy paste the master setting to the new client name.
   cp "Acme Widgets Incoming" "Spacely Sprockets Incoming"
Edit the new setting "Spacely Sprockets Incoming" for the correct path-to-folder.
     value=/Volumes/Raid/Studio/Acme_Widgets/Incoming
becomes:
     value=/Volumes/Raid/Studio/Spacely_Sprockets/Incoming
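Those copy-and-edit steps can be rolled into a small shell loop. A hedged sketch: the settings path and the master action name come from the example above, but the client list, the `make_move_action` helper, and the `SETTINGS_DIR` override are my own invention - adjust to taste (the `desc=` line could be rewritten with a second sed expression the same way):

```shell
#!/bin/sh
# Batch-create Venture move actions by cloning a master settings file.
# SETTINGS_DIR can be overridden for testing on a machine without Xinet.
SETTINGS_DIR="${SETTINGS_DIR:-/usr/etc/webnative/actions/move/settings}"
MASTER="Acme Widgets Incoming"

make_move_action() {
    client="$1"                                   # e.g. "Spacely Sprockets"
    # Folder names use underscores where the client name has spaces.
    path_name=$(printf '%s' "$client" | tr ' ' '_')
    # Clone the master, rewriting the destination path for this client.
    sed "s|Acme_Widgets|${path_name}|" "$SETTINGS_DIR/$MASTER" \
        > "$SETTINGS_DIR/${client} Incoming"
}

# Hypothetical client list; the guard keeps the sketch harmless off-server.
if [ -d "$SETTINGS_DIR" ]; then
    for client in "Spacely Sprockets" "Cogswell Cogs"; do
        make_move_action "$client"
    done
fi
```

Feed the loop from a text file of client names and you can generate hundreds of actions in one pass, which is exactly the "300 move actions by lunch" scenario.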
 
If you need to create a lot of actions that do the same thing to unique paths, this is the
quickest way to go. The command line is not for everybody, but for those who know their way
around, it's another powerful way to work with the Xinet Venture database. My client now has over
300 move actions to file uploaded assets into the correct client folders automatically.
All done by lunch time..!
So what's for dessert?

-Sully 

So, how can Xinet help me do. . . . .?

Posted by Brian Dolan on Wed, Apr 01, 2009 @ 11:27 AM

Tags: Xinet, Elegant, knowledge, Dalim, Dialogue, Creative Banks, nTransit, how to

Being in the position that I'm in affords me the opportunity to travel and see many of our clients around the country.  I love working with all of you, and you all have your own unique personalities as well as challenges in your respective environments.  Something I want to share with you all is a question I'm often asked: "How can I make Xinet do more for us?  I know it can do X, Y or Z but I only use it for (fill in the blank) and want to do more with it!"  Usually the next question is, "How do your other clients do it and how are they using Xinet?"  Well, let's start with the second one. . .how DO other clients work with Xinet?  This is a hard one, since all of you use it for different reasons and have unique needs even though many of you are in the same business.

So, how do I make it do the trick you're asking about?  Get to it already, would ya!

A couple of things to think about before I can suggest anything:

1) Listen to your clients (internal and external)- You know your company and how it ticks better than us.  Yes, we at NAPC all have been around the industry for a long time and bring plenty of knowledge to the table but. . .you're the one hearing the conversations in meetings or through the hallways with questions like, "How can we share out assets to client X but not allow them to do or see Y", or "How can I automate the process of creating multiple PDF's from one print command?", or "How can I customize the interface so it looks like my clients brand?" or "Can I do . . ."  You get the point right???. . .if not, the point is you know your world better than we do so listen to your user community and then start thinking about how to solve their challenges with the tool set at hand. And if the current tool set doesn't accomplish the needed task, then there is most likely a plugin or a solution to make it do the trick like Creative Banks or Elegant or nTransit.  Xinet is an open product so customizations can be done and probably already have been.  Ask us-we'll be happy to help.

2) Be creative yourself- it's easiest to hear from someone that says yeah, we did this thing and it really rocks or ask us to replicate something that was done before but think about how to do "it" yourself.  You may be in IT or work in some IT capacity but that doesn't mean you're not creative!  You are-you just have to find the time to sit down and think about it. I know, I know, easier said than done as we're all super busy but if you want to really do something, you'll make the time. 

3) Is my idea even possible? Look, all technologies have their limits so if you want Xinet to make your coffee (not too strong of course!) AND fold your laundry, you might be pushing it.  So, this is a good time to ask NAPC as well as look at the Xinet manuals.  Seriously, look at the manuals.  I know a lot of you depend on NAPC for the knowledge to be handed to you and we don't mind that at all.  That's what we're here for!  Although, you might be better served by reading up on the technology you manage.  Right!?!?!?!?!?  You've all heard RTFM or to be politically correct I should say RTM but whatever, you get the point.  I've been to plenty of training classes, received lots of great advice from others in and out of the industry but my best resource to date has been the manuals.  Read up everyone!

So really, those ARE my suggestions.  Listen to your clients, be creative yourself in coming up with ways to solve the challenge and do your research by speaking with us and reading up on the Xinet manuals.  Seriously, you all will be waaaay better off in the end if you put the time into it.  Again, I know you're all busy but this is important stuff here right?!?!  Just like working out, which I do all the time! :), it takes dedication and Xinet is no different.

Also, after working the three steps above first, you'll be able to better answer your first question yourself,  "How can I make Xinet do more for us".  And if not, again, we're here to help with suggestions and industry knowledge to get you to where you need to be.

Bottom line, be proactive with learning this stuff . . .it'll really help you in the long run.

Oh yeah, one more thing on this subject, work closely with the people that have a hand in Xinet.  If you're more on the creative side of things, create the relationship needed to work together harmoniously with your IT staff.  Contrary to popular belief, they are truly there to help you, not hold you back even though sometimes it may seem that way.  And, if you're in IT, be open about this stuff and the ideas that may come your way.  Don't start with "No", think about it and be creative in helping solve the problem or challenge at hand by working closely with your clients.  Can I get a "Kum ba ya!"

Anyway. . .two quick things before I get off my high horse. . .

1) Dialogue ES is around the corner.  If you're familiar with how Dalim's Dialogue currently works then you'll probably be happy to hear how it's evolving.  This week Dalim is releasing the internal beta so I'll get my hands on it and write another blog entry just on that subject later but some quick things to mention:
The interface has changed quite a bit - this is good stuff, guys and gals. . .totally revamped and much more slick.  Again, more to come later.  As far as functionality, it's totally rewritten from the ground up and now has a database behind it.  This can open up all kinds of possibilities, think about it.  Linear versus non-linear workflows.  So now, instead of having user a, b, then c approve or reject a document, it can be more of a fluid approval process and not so much in a linear fashion as it is now.  Also, once a user approves or rejects the document, that action doesn't have to stop the process as it does now in a multi-user approval process.  In other words, if user a rejects the document, user b or c can still approve or reject it themselves, versus how it is now.  In the current version, if a user in an approval workflow rejects the document, it's done, that's it.  No one else can approve or reject it using the built-in approval tools.  That changes with ES.  New icons for statuses, new list views to easily see all users' statuses, new interface, etc.  Lots more to come there.

Anyway, thanks for taking the time to read our blog, and I hope you all stay tuned for more from NAPC.  We're dedicated to making you successful!

Brian

For more info go to www.napc.com

Push the Xinet Envelope

Posted by Robert Sullivan on Mon, Mar 23, 2009 @ 10:08 PM

Tags: database, knowledge, webinar, workflow

So here I go a blogging away. I'm not too sure of how to start because usually I'm following an agenda of some kind, or a question to start with. Someone has a problem or a challenging workflow to configure. Blogging is more free flowing, I guess... We'll see.
 
I do a lot of training for new clients, and there is so much information to absorb that they can easily become overwhelmed and overloaded. As they get used to the system, they'll Venture further (pun intended) and start looking for cleaner ways to use their new system. But not many will make that leap and really push the limits of what they can get out of it early on. There's way more power under the hood.
 
The problem I find so often is that people have an idea of what the server can do but aren't sure how to get there. And that stops them. At that point, the thing that most often drives them is a client request, or the boss (everyone has a boss...) saw a Webinar about some cool widget and wants it created. Is it done yet?
When I was in Printing, I worked for a guy that always said,
"If I wanted it tomorrow, I'd be asking for it then. I want it now!"
 
Then the calls start coming in in earnest. Which is great for me, because now I have something to dig into. A new challenge. But I wonder: how do we get people to push the envelope before they get the push themselves? It's training, and it's knowledge. NAPC runs Webinars all the time on different applications, from FullPress and the Venture database to Dalim Dialogue and the Xinet Uploader. Sharing the knowledge is driving the train here!

Do you have an idea for a Webinar you'd like to see? Tell us. Got an idea for any Trigger automations? Let me know. If you can conceive of it, we can build it... well, I'm looking for the stuff we can build out-of-the-box. Custom stuff comes later.

-Sully