Migrating from Stellent UCM & IBPM – A little foresight can alleviate a lot of trouble

Migrations from systems like IBPM to ILINX can be fraught with issues that bite the unwary in very bad places. However, if you are aware of these problems, you can plan ways to mitigate them and have a successful migration in the end.

One issue we run into is documents with a page or two of corrupt images. Perhaps when the page was first contributed to IBPM, a system or other issue caused the image to be corrupted or lost; either a physical hardware failure or a software bug can be the culprit. The product we use for migration, ILINX Export, will flag such a document as an error, skip it, and move on to the next document in RECID order. Once the export is complete, these flagged documents have to be revisited. If a determination is made that an image is indeed corrupt, and the chance of recovering it from backups is extremely remote, the document can be deleted or manually exported from IBPM without the corrupt image.

Another matter we’ve dealt with relates to non-TIFF images. This category, “universal” type images, includes PDF, DOC, XLS, MSG, and a host of other file types that IBPM supports. There are options within the ILINX Export tool that allow exporting these file types in their native format through the IBPM SDK. Alternatively, the export can be done through database manipulation that directly accesses the image file and then “unzips” the universal file into its native format.

The issue that can be encountered here is twofold, and it manifests itself when migrating to another repository. First, IBPM stores the native file zipped up with a second file that contains metadata and has no file extension. When the document is unzipped there are two files, one with a valid file type and one without. Backend repositories typically require file extensions, which are used for things like displaying the correct file type icon in the user interface, among a variety of other reasons. During the migration, importing to the backend may be impeded by the lack of extensions on the metadata files. Second, if the extension of the universal file has been altered or damaged in storage, the file type may not be a standard the new repository will accept. In either case, having your migration come to a screeching halt is something to avoid.
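A simple pre-import sanity check can catch both problems before they stall the import. As a sketch, assume the export step records one row per output file in a staging table; the ExportedFiles table and its columns below are hypothetical names for illustration, not part of the ILINX or IBPM schema:

-- Flag exported files that may trip up the destination repository: either no
-- extension at all, or an extension the new system won't accept. The table,
-- columns, and extension list are illustrative only.
select FileName
from dbo.ExportedFiles
where case
        when charindex('.', FileName) = 0 then 'missing'
        else lower(right(FileName, charindex('.', reverse(FileName)) - 1))
      end not in ('tif', 'tiff', 'pdf', 'doc', 'xls', 'msg')

Running a check like this against the export output lets questionable files be corrected or routed for manual handling before the bulk import begins.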

Awareness is the key. By proactively incorporating a response into your migration plan, you can eliminate much heartburn and anxiety. That is where the expertise and knowledge of a seasoned Optika / Stellent / Oracle integrator, like ImageSource, comes into play. We have helped many customers build migration plans that take these and other items into account, so the migrations are as smooth and worry-free as possible.

Oracle IPM 10g and Imaging 11g Migration: Part 2

A couple of weeks ago I wrote a post about ECM migrations, focused specifically on moving content from Oracle IPM/Imaging to other destination systems, which are projects we’ve been performing a lot of lately. Our tools of choice for migrations are ILINX Export and ILINX Import, but if the destination ECM system isn’t supported by ILINX Import, there are other options. Almost every ECM system has mechanisms for bulk or mass imports. ILINX Export provides many options to format the data, so sometimes it is simply a matter of configuring the output in a format supported by the third-party import application. Other times, utilizing these third-party import applications may require a little development. Regardless of what’s necessary, we’ve never run into a destination system that we couldn’t work with.

There are multiple reasons we split the migration into two parts, export and import, flexibility being the biggest one. Splitting the migration into two separate operations opens up a lot more options: since we don’t modify the data on export from the source system, a snapshot can be taken for long-term archival; then on import, or pre-import, we can massage the data, perform file conversions, or augment the data by pulling additional information from an external source. Even though we split the migration into two operations, they can be run in tandem, so there is little effect on the overall duration of the migration.

One of the biggest concerns surrounding these migrations is the amount of time it will take. Performing tests in the actual environment is required because of how many variables go into the throughput of a migration. If the migration is estimated to take too long after initial testing, there are options to address that scenario, including:

  • Create a migration environment with instances of the source ECM system software on newer, more powerful servers, and restore the production data to these new servers in order to execute the migration from there. This has the additional benefit of removing any potential performance impact to the legacy production system for the duration of the migration.
  • Spin up additional instances of ILINX Export and/or ILINX Import to increase throughput. There will be a point when additional instances of the export or import process will not increase throughput, generally when a bottleneck restricts the maximum throughput that the source or destination system can achieve.

Recently, I had a customer who had set a hard go-live date just 60 days after project initiation for their new system. We had no problem meeting this requirement from a technology deployment standpoint, but our migration testing indicated that we wouldn’t be able to move all of their 25+ million documents in that time frame. In order to make the new system go-live date, we migrated the three previous years’ content first, then resumed with the older, remaining content. Since the vast majority of content to be retrieved would be from the previous year, the fact that the migration wasn’t 100% complete at go-live was a non-issue. This is an approach we’ve followed numerous times.
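To put rough, illustrative numbers on that scenario (these are not the actual project figures): moving 25 million documents in 60 days of around-the-clock processing requires a sustained rate of about 25,000,000 ÷ (60 × 24 × 3,600 seconds) ≈ 4.8 documents per second. If initial testing shows the environment sustaining only, say, 2 documents per second, roughly 10 million documents fit in the window, and a phased approach like the one above becomes the obvious answer.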

Once a migration is in full swing, auditing can be the most time consuming part of the process. ILINX Export and ILINX Import have very complete auditing capabilities, so while the migration is occurring, issues are immediately identified and can be addressed. We generally audit a couple different ways to confirm success. If only using ILINX Export, what is exported can be compared with what is in the source system to ensure all content was pulled out. When performing a complete migration, what is imported into the destination system is compared with the source system. Any migration can only be considered a success when it is proven that all the content was migrated, which is why we practice multi-step auditing during the migration.
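As a sketch of what one of those audit passes can look like, assume both systems expose a queryable document table. All database, table, and column names below are hypothetical; the real comparison depends on the schemas involved:

-- Compare per-application document counts between source and destination;
-- only mismatches are returned. All object names are illustrative.
select s.AppName, s.DocCount as SourceCount, isnull(d.DocCount, 0) as DestCount
from (select AppName, count(*) as DocCount
      from SourceDB.dbo.SourceDocs group by AppName) s
left join (select AppName, count(*) as DocCount
           from DestDB.dbo.DestDocs group by AppName) d
  on d.AppName = s.AppName
where isnull(d.DocCount, 0) <> s.DocCount

A query like this, run periodically during the migration, surfaces gaps while the window to address them is still open.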

By following our standard methodology for migrations and utilizing the technology we’ve developed over the years, we consistently deliver successful migrations. To read more about migrations, review my previous blog posts Oracle IPM 10g and Imaging 11g Migration and Steps for a successful ECM migration using ILINX Export.

If you have any questions about my blogs, or would like to discuss the possibilities for migration within your organization, please reach out to me or your contact at ImageSource to start the conversation.

John Linehan
Sr. Systems Engineer
ImageSource, Inc.

Oracle IPM 10g and Imaging 11g Content Migrations

Migrations are one of the things we’ve always done a lot of here at ImageSource; they’re definitely one of our core competencies. For a little more information on our approach to migrations, you can review my earlier post here. Lately the migrations have been more focused, because most of them involve moving content out of the Oracle IPM 10g or Oracle Imaging 11g products. Oracle IPM 10g has reached end-of-life and Oracle Imaging 11g was the terminal release for the product, so essentially the product line is dead. We worked with the IPM 10g product for many years, so we have a wealth of knowledge of its ins and outs. IPM was a feature-rich but older product stack, and it was in need of a bit of an overhaul. However, when Oracle rewrote the product as Imaging 11g, a lot of key features didn’t make the cut.

Because of everything I’ve mentioned, businesses running on these particular Oracle ECM platforms have had to make decisions about their long-term ECM vision or roadmap. I have worked with a number of clients on technology evaluations and the like to help determine their roadmap, but that’s a blog post for another time. One of the key pieces of any ECM roadmap for a company making these solution changes is the migration of content from the Oracle IPM 10g or Imaging 11g systems being replaced. Luckily we have the tools and knowledge to make these migrations as straightforward as possible, the tools being ILINX Export and ILINX Import.

There are a number of options with ILINX Export, but in short, we use it to export all content and metadata out of a source system so it can be migrated into whatever destination system is necessary. By default, ILINX Export retrieves the content from the source system in exactly the same format it was in when it was added to the original system. By exporting the content in its native format, a customer can always keep a copy of the original data, and any data manipulation or file conversions can be done downstream. ILINX Export does have the ability to convert files to PDF, but I generally do image conversion when importing the content into the destination system. Utilizing our knowledge of the Oracle products, we have plenty of options when extracting content from them. For example:

  • Only migrate certain applications.
  • Only migrate content created after or before a certain date.
  • Only migrate the content that falls within certain criteria: for example a specific business unit, a set of document types, or virtually any criteria that can be identified with the content metadata.
  • Split the content up so content that meets certain criteria goes to one destination and content meeting other criteria goes elsewhere.
  • Retain the IPM or Imaging annotations. These can be flattened into the documents, but I only recommend that in certain instances. If the client is migrating to ILINX, we can migrate the annotations as an overlay into the new ILINX system.
  • There are many options on the format of the data when it is exported from the source system. ILINX Export can output the metadata to text or XML files with complete control of the format, delimiter, field order, layout, and size of those files. That flexibility allows for the creation of input files in a format that can work for just about any destination system.
  • The metadata can also be written directly to SQL to support long-term storage or manipulation if necessary (a concrete sketch follows this list).
  • Scheduling the export to run during off-hours to keep load off the servers while clients are using the old system.
  • Detailed auditing of the entire process to help with reporting, compliance, and troubleshooting.
  • Many more.
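To make the criteria-driven options above concrete (see the SQL bullet earlier in the list), here is a sketch of the kind of selection that becomes possible once the metadata is in SQL. The ExportedMetadata table and its columns are hypothetical names for illustration, not the actual IPM, Imaging, or ILINX schema:

-- Select documents from one business unit created in the last three years,
-- routing invoices to one destination and everything else to another.
-- All table and column names are illustrative only.
select DocId, DocType, FilePath,
       case when DocType in ('Invoice', 'Credit Memo')
            then 'AP_System' else 'Archive' end as Destination
from dbo.ExportedMetadata
where BusinessUnit = 'Accounts Payable'
  and CreatedDate >= dateadd(year, -3, getdate())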

Once all the requirements surrounding the migration have been defined and execution has started, one of the next steps is importing that content and metadata into the destination ECM system. I’ll go over next steps, along with more on the export portion of a migration, in a follow-on post. If you have any questions about migrating content out of a system, or about ILINX Export, reach out to us for a demo or discussion.

Transferring ILINX Release Configurations When Upgrading

Starting with ILINX Capture v6, the Release configurations are stored within the ILINX database. In ILINX Capture 5x, the ILINX Release configurations were stored in XML files on disk, and ILINX Capture called ILINX Release using a SendAndReceiveReply IXM. The change to store the settings within the ILINX database is very useful for a number of reasons: Release settings are part of the batch profile, allowing for simpler migrations between environments; Release is much easier to configure; all configurations are in the database; etc. However, this change can create some extra work when upgrading from ILINX Capture 5x to ILINX Capture 6x, because the different architecture means ILINX Release needs to be completely reconfigured for the existing batch profiles. The Release settings XML itself hasn’t changed, though, so there is a shortcut that can be taken. After you have upgraded ILINX Capture to v6, you’ll notice a new ILINX Release IXM in the palette.

The existing ILINX workflow will likely have a SendAndReceiveReply IXM on the map that the 5x version of ILINX Capture used to call ILINX Release. To configure ILINX Release for ILINX Capture 6x, that SendAndReceiveReply IXM will need to be removed from the map and a Release IXM dragged onto the workflow map in its place. Once the new Release IXM is on the map, it will need to be configured. This is where the shortcut can be taken. Instead of manually entering the correct URLs, mapping the metadata values, and configuring any other settings, do this:
Configure and save Release with some placeholder settings. I normally leave the settings at default and enter the bare minimum:

  • Job Name
  • User Name
  • Password
  • Batch Profile
  • Release Directory

Once the ILINX Release configuration is saved and the workflow map is published, there will be a new entry in the CaptureWorkflowAppSettings table in the ILINX Capture database. The CaptureWorkflowAppSettings.SettingsXml column is where the Release configuration is stored. Now it’s time to update the SettingsXml column with the XML from the ILINX Release 5x job settings file. The Release job should be on the ILINX Release 5.x server at c:\ProgramData\ImageSource\ILINX\Release\Settings\Jobs. The only caveat here is to be sure to place single quotes around the XML content, and to double up any single quotes that appear inside the XML itself, per standard SQL string escaping. Here is what the SQL update statement would look like:

update [ILINX CAPTURE DATABASE].[dbo].[CaptureWorkflowAppSettings]
set SettingsXml = 'COPY AND PASTE ALL TEXT FROM 5.4 OR PRIOR RELEASE JOB SETTINGS FILE HERE'
where settingsID = 'APPROPRIATE ID HERE'
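
If you’re not sure which settingsID to use in the where clause, you can list the stored settings and spot the row whose XML contains the placeholder values you saved earlier. This query uses only the table and columns referenced above:

-- List stored workflow app settings; the row whose SettingsXml contains the
-- placeholder Release values entered earlier is the one to update.
select settingsID, SettingsXml
from [ILINX CAPTURE DATABASE].[dbo].[CaptureWorkflowAppSettings]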

Following this procedure can save some time if upgrading an ILINX Capture 5x system that has a lot of batch profiles. A lot of the time spent on the upgrade could be in the ILINX Release configuration. If I was upgrading a system with only a few batch profiles, I would probably just reconfigure them. If I was upgrading a system with a lot of batch profiles, I would go through the above steps to save some time.

John Linehan
Sr. Systems Engineer
ImageSource, Inc.

Failover Cluster Troubleshooting

There’s nothing quite like logging in to a customer’s system first thing Monday morning only to be greeted with this:

[Screenshot: failover cluster report]

I discovered this when I wasn’t able to log into the customer’s ILINX Capture implementation. The logged error (failure to locate the SQL Server) led me to take a look at the SQL Server’s configuration to confirm that its service was not running on either node of the cluster, and the error I got when trying to start that (a clustered resource could not be activated) led me to check on the clustered resources themselves.

Implementing SQL FILESTREAM Part II

Last month I wrote about enabling SQL FILESTREAM with ILINX Content Store. After discussing this with a few people, I think I should share some more information and reiterate a couple points.

For Existing Applications:
As I mentioned before, the decision to enable FILESTREAM should be made during the planning phase. If you perform this process on an application with a lot of content, it can be a very time-consuming endeavor with a big performance impact on the server. Also, after the move from BLOB to FILESTREAM, you could have a fragmented database. The BLOB-to-FILESTREAM process can definitely be done on an existing system; just be sure to plan accordingly and allow for sufficient time.

After step #10 of my previous blog post (all the data is copied and you have deleted the BLOB column), you will notice that the database file size hasn’t decreased. This is remedied easily enough by executing a DBCC CLEANTABLE command, which will reclaim the space from the dropped variable-length column. For example, if your database is named ILINX_CS and your application is named Sample Application, the query to do this is:

DBCC CLEANTABLE ('ILINX_CS','[dbo].[Sample Application]',10000)

The third parameter (10,000 here) is the batch size, the number of rows processed per transaction; without it, the entire table is processed in a single transaction.

Storing content outside of SQL Server for ILINX Content Store using SQL FILESTREAM

By design, ILINX Content Store stores documents within the SQL database as BLOBs. There are many advantages to this design (security, performance, etc.) but sometimes there is a reason to store the documents outside of the SQL database. SQL Server has a method to do this called FILESTREAM. FILESTREAM integrates SQL Server with the NTFS file system by storing varbinary(max) data outside of the SQL database. FILESTREAM uses the NT system cache for caching file data: this helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing.
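For orientation, here is a minimal sketch of what FILESTREAM storage looks like at the SQL Server level. This is generic SQL Server setup, not the actual ILINX Content Store schema; it assumes the instance-level FILESTREAM access level has already been enabled, and the filegroup, path, and table names are illustrative:

-- Add a FILESTREAM filegroup and file location to the database.
-- The directory's parent (D:\SQLData) must exist; SQL Server creates ILINX_FS.
alter database ILINX_CS add filegroup ILINX_FS contains filestream
alter database ILINX_CS add file (name = 'ILINX_FS_Data', filename = 'D:\SQLData\ILINX_FS') to filegroup ILINX_FS

-- FILESTREAM requires a ROWGUIDCOL; the varbinary(max) column's contents are
-- then stored on the NTFS file system instead of inside the database file.
create table dbo.SampleDocuments
(
    DocumentId uniqueidentifier rowguidcol not null unique default newsequentialid(),
    DocumentData varbinary(max) filestream null
)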

One of the main reasons to implement FILESTREAM is documents that are generally larger than 1 MB in size; storing them outside the database can then have a performance advantage. If these are TIFF documents, the 1 MB threshold applies on a per-page basis because of how ILINX Content Store stores TIFF documents. By design, ILINX Content Store splits multipage TIFFs into single pages to allow users to perform actions on individual pages of a document, things like reordering pages, deleting a single page, or rotation.