Migrating from Stellent UCM & IBPM – A little foresight can alleviate a lot of trouble

Migrations from systems like IBPM to ILINX can be fraught with issues that bite the unwary in very bad places. However, if you are aware of these problems ahead of time, you can plan ways to mitigate them and end up with a successful migration.

One issue we run into is documents that have a page or two with corrupt images. Perhaps when the page was first contributed to IBPM, a system problem caused the image to be corrupted or lost entirely; either a hardware failure or a software bug can be the culprit. The product we use for migration, ILINX Export, will flag such a document as an error, skip it, and move on to the next document in RECID order. Once the export is completed, these flagged documents have to be revisited. If a determination is made that an image is indeed corrupt and the chance of recovering it from backups is extremely remote, the document can be deleted or manually exported from IBPM without the corrupt image.

Another matter we’ve dealt with relates to non-TIFF images. This category, known as “universal” images, includes PDF, DOC, XLS, MSG, and a host of other file types that IBPM supports. ILINX Export has options that allow these file types to be exported in their native format through the IBPM SDK. Alternatively, the export can be done through database manipulation that accesses the image file directly and then “unzips” the universal file into its native format.

The issue that can be encountered here is twofold, and it manifests itself when migrating to another repository. First, IBPM stores the native file zipped up with a second file that contains metadata and has no file extension. When the document is unzipped there are two files, one with a valid file extension and one without. Backend repositories typically require file extensions, which they use for displaying the correct file-type icon in the user interface, among other things. During the migration, importing into the backend may be impeded by the missing extensions on the metadata files. Second, if the extension of the universal file has been altered or damaged in storage, the file type may not be one the new repository will accept. In either case, having your migration come to a screeching halt is something to avoid.

Awareness is the key. By proactively incorporating a response into your migration plan, you can eliminate much heartburn and anxiety. That is where the expertise and knowledge of a seasoned Optika / Stellent / Oracle integrator, like ImageSource, comes into play. We have helped many customers build migration plans that take these and other items into account, so the migrations are as smooth and worry-free as possible.

Transferring ILINX Release Configurations When Upgrading

Starting with ILINX Capture v6, Release configurations are stored within the ILINX database. In ILINX Capture 5.x, the ILINX Release configurations were stored in XML files on disk, and ILINX Capture called ILINX Release using a SendAndReceiveReply IXM. Storing the settings in the ILINX database is useful for a number of reasons: Release settings are part of the batch profile, which allows for simpler migrations between environments; Release is much easier to configure; all configurations are in the database; and so on. However, this change can create some extra work when upgrading from ILINX Capture 5.x to 6.x. Because of the different architecture, ILINX Release needs to be completely reconfigured for the existing batch profiles. Fortunately, the Release settings XML itself hasn’t changed, so there is a shortcut that can be taken. After you have upgraded ILINX Capture to v6, you’ll notice a new ILINX Release IXM in the palette.

The existing ILINX workflow will likely have a SendAndReceiveReply IXM on the map that the 5.x version of ILINX Capture used to call ILINX Release.
To configure ILINX Release for ILINX Capture 6.x, the SendAndReceiveReply IXM will need to be removed from the map and a Release IXM dragged onto the workflow map in its place. Once the new Release IXM is on the map, it will need to be configured, and this is where the shortcut comes in. Instead of manually entering the correct URLs, mapping the metadata values, and configuring all the other settings, do this:
Configure and save Release with some placeholder settings. I normally leave the settings at their defaults and enter only the bare minimum:

  • Job Name
  • User Name
  • Password
  • Batch Profile
  • Release Directory

Once the ILINX Release configuration is saved and the workflow map is published, there will be a new entry in the CaptureWorkflowAppSettings table of the ILINX Capture database. The CaptureWorkflowAppSettings.SettingsXML column is where the Release configuration is stored. Now it’s time to update the SettingsXML column with the XML from the ILINX Release 5.x job settings file. The Release job settings files should be on the ILINX Release 5.x server at c:\ProgramData\ImageSource\ILINX\Release\Settings\Jobs. The only caveat here is to be sure to place single quotes around the XML content (and to double up any single quotes that appear inside the XML so they don’t break the T-SQL string literal). Here is what the SQL update statement would look like:

update [ILINX CAPTURE DATABASE].[dbo].[CaptureWorkflowAppSettings]
set SettingsXml = 'COPY AND PASTE ALL TEXT FROM 5.4 OR PRIOR RELEASE JOB SETTINGS FILE HERE'
where settingsID = 'APPROPRIATE ID HERE'
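
If the system has more than a handful of batch profiles, it may not be obvious which settingsID belongs to the Release IXM you just saved. Before running the update, a quick query against the same table can help identify the correct row. The statement below is only a sketch that reuses the table and column names referenced above; the filter on the placeholder Job Name is an assumption, so adjust or remove it to suit your environment.

-- list the stored workflow app settings so the correct settingsID can be identified
select settingsID, SettingsXml
from [ILINX CAPTURE DATABASE].[dbo].[CaptureWorkflowAppSettings]
-- optional: narrow the results to the row containing the placeholder Job Name entered earlier
-- (the cast keeps the filter working whether the column is stored as xml or text)
where cast(SettingsXml as nvarchar(max)) like '%PLACEHOLDER JOB NAME%'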

Following this procedure can save some time when upgrading an ILINX Capture 5.x system that has a lot of batch profiles, since much of the time spent on an upgrade can go into the ILINX Release configuration. If I were upgrading a system with only a few batch profiles, I would probably just reconfigure them; for a system with a lot of batch profiles, I would go through the steps above to save time.

John Linehan
Sr. Systems Engineer
ImageSource, Inc.

Failover Cluster Troubleshooting

There’s nothing quite like logging in to a customer’s system first thing Monday morning only to be greeted with this:

[Cluster report screenshot]

I discovered this when I wasn’t able to log in to the customer’s ILINX Capture implementation. The logged error (a failure to locate the SQL Server) led me to look at the SQL Server’s configuration, which confirmed that its service was not running on either node of the cluster, and the error I got when trying to start the service (a clustered resource could not be activated) led me to check on the clustered resources themselves.
Continue reading

Enabling Full-Text Search in ILINX

I recently enabled full-text search on an ILINX system and thought it would be a good idea to share the procedure here. ILINX leverages MSSQL’s full-text capabilities, so the process is mainly a matter of making sure everything is set up properly on the database side. Here are the steps I followed.

1. Confirm Full-Text is installed and enabled on the SQL server

First I had to determine if Full-Text was installed on the SQL server. To do this I executed the following query:

select fulltextserviceproperty('isfulltextinstalled')

If the query returns a ‘1’, full-text is installed on the server.

Next, I needed to confirm that full-text is enabled for the ILINX Content Store database. To do this I executed the following query against the ILINX Content Store database:
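
One common way to make this check in SQL Server is to read the database property shown below, which returns a ‘1’ when full-text is enabled. This is a sketch rather than necessarily the exact query from the full post, and the database name is an assumption, so substitute the actual name of your ILINX Content Store database.

-- returns 1 when full-text is enabled for the named database (database name is a placeholder)
select databasepropertyex('ILINX Content Store', 'IsFullTextEnabled')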

Continue reading

Exporting Records Using KTM Medical Claims Add-on & Kofax Export Connector

Kofax provides an add-on pack for their Transformation Modules product, the KTM Medical Claims Add-on package, to handle CMS-1500 (formerly HCFA) and CMS-1450 (also known as UB-04) medical claims forms. Related to this package is the Kofax Export Connector for Medical Claims. This Export Connector allows the data captured from these forms to be formatted in the HIPAA-compliant EDI/ASC X12 837 formats (to be exact, 837 Professional for the CMS-1500 and 837 Institutional for the UB-04).

Continue reading

When handwriting is your only option… – Peter Lang

When people research Enterprise Content Management capture projects, the question of handwriting recognition comes up again and again, and many aren’t sure what to expect. More often, their expectations are unrealistic in one direction or the other. Some think there is no hope at all, ever. On the other end of the spectrum, some think that tiny, fevered cursive scribblings from a rushed meeting can be scanned (or even faxed) and read with accuracy. In helping people think about their forms and the viability of capturing handwriting, I have a few simple guidelines that seem to apply in the majority of cases.

  • Are handwritten forms really the only option? If the form is available online, can it be made “fillable” so the data is submitted directly to your database tables? Can you let the user fill out the form online and print it, thus producing machine print and eliminating handwriting? How about taking the data the user entered and bar coding it (if the form must be printed rather than submitted)? Also helpful and sometimes overlooked: prefilling form data from your database through a merge process, with a bar code index for retrieval of that same data.
  • Does your Capture software support ICR?  Intelligent Character Recognition (ICR) is what you need to read handwriting.  Optical Character Recognition (OCR) is much more common and is designed to read machine print.  Please don’t try to make it read handwriting – you won’t like the results!
  • Make sure the handwriting is constrained. Annoying? Perhaps. But making the person filling out the form write in boxes sets you up for the most successful ICR results. The catch phrase here could be “Curse the cursive”: when one character is joined to another it is faster to write, but the ICR software really struggles to figure out where one character stops and the next starts, and that is where recognition tanks. With a real-world example of constrained, boxed print, we can generally expect 100% recognition.

  • Ask for all caps handwriting. You can often tell your ICR engine to look for upper case characters only. This really

Continue reading

Vetting ABBYY ‘Keen Eye’ FlexiCapture at ImageSource

First off, ABBYY means “keen eye”, an apt name for a product that dynamically and automatically captures and processes widely disparate documents. Powerful document recognition separates and classifies docs, and state-of-the-art optical character recognition rips the data from the images. I like the motto that pops up on screen – “take the data, leave the paper”. I love doing just that, sending paper briskly off to start its next recycled life. It’s the greenest thing to do, especially compared to filling endless cabinets and long-term off-site storage facilities.

When you want to recommend, sell, support, and solve major customer problems with ECM software at ImageSource, due diligence mandates a thorough feature review and testing. I’ll describe some of the steps I was involved with in this process for ABBYY FlexiCapture, though mine is but a single slice of the vetting-team pie. Development and other engineering teams performed specific examinations to answer questions about integration, APIs, and narrower capabilities for solving unique problems faced by eager customers. Also, ImageSource staff with a variety of titles took a week-long training course with intensive labs. Unfortunately I missed the class, but I was given the opportunity to spin up for a pre-sales demo last year, which was a lot of fun.

So here’s a peek at our process:

Laptop Install

First things first! I like to be able to run new software on my laptop whenever possible. This frees me from all bandwidth and location constraints. I can easily focus on the vetting effort on a plane, down by the river, wherever and whenever. ABBYY FlexiCapture has a convenient ‘Standalone Installation’ option, which gives you access to all the key components on one box.

Obtain Sample Images from Client

In this case we gathered dozens of hardcopy invoices from a large international corporation.  The images were not pretty and included originals, copies, printed faxes, you name it.

Ascertain Server Needs

After reviewing the ABBYY documentation we set the requirements for our labs – memory per server, disk space, software required, scan station requirements, scanner requirements, and required operating systems.

Spin Up VMs

Thanks to Mike Peterson we had three servers up in no time.

Convene the Team, Lock Down the ‘War Room’

Gene Eckhart, Jeff Doyle and I met in our Olympia office for a week. Gene secured the war room, where we periodically met with developers, project managers, engineers, and principals. Most of the time it was the three of us banging away.
Continue reading