Friday, September 19, 2008

Solution Demos: Catch 22

Damned if you demo, damned if you don’t, especially if your client is new to Documentum. You can talk about functionality, show the vanilla installation of Webtop, and still get surprised looks when you show the final solution. On the flip side, you can demo the solution when it’s not fully tested and have to dance around errors and incomplete development.

Demos of a client’s project are essential, especially when the client has read the Functional Requirements Spec and the Tech Design without many comments. That usually means they don’t understand the documents and/or didn’t have enough time to review them. This can be frustrating, but if you demo the solution in front of those same future users, you’ll get invaluable feedback and will eventually have a more successful launch of the solution.

Steps to successfully demonstrate your solution:

  • Test the part of the solution that will be demo’d
  • Plan how you will talk around any part of the solution that is not completed
  • Have an agenda of what will be shown during the demo
  • Repeat to the clients/users that this is not a finished solution and that there will be bugs
  • Try to have fun with your audience; this should not be too serious
  • Have other technical resources there to take notes and comment on questions that pertain to their customizations
  • Listen to your client’s comments and take their feedback seriously
  • Make notes of comments that could point to future opportunities, like “Does this mean we have to sign on to another application each time we want to access it?” That could mean single sign-on is in their future. Or, “I already fill out this information in the database, why do I need to do this twice?” That could point to an integration possibility that eases the transition to using Documentum, not to mention makes finding content easier with more integrated attributes.
  • Schedule a few demos to reduce the risk of disillusionment after an error causes one demo to fall short of expectations.
  • Try to impress the client each time by showing something extra in the product that they might not have seen.
  • Whatever you do, do not take screenshots and dummy up the solution. This will lead to more questions and anxiety over the progress of the overall solution.

Friday, August 22, 2008

From Functional Requirements to Tech Design

The most important part of designing a Documentum solution is to understand and translate the business and functional requirements into Documentum configurations and customizations. You have to map all of the major components of the requirements, matching each requirement to a solution. You might use a traceability matrix, or you might organize your specification similarly to the functional requirements; it depends on the project scope and business expectations. My opinion is that if you need a traceability matrix, the solution is too wide in scope or there are trust issues with the business.

Document Types to Object Types

Scope of project phase
When reviewing the list of doc types in the functional specification, do not take for granted that these are the final types of documents. A business analyst cares about capturing all of the possible documents, not necessarily thinking about the scope of the project when he does so. Be sure to explicitly describe the scope of the project in your design. For example, the doc types may cover all content in the enterprise in order to create a comprehensive object model; however, the scope of the current phase of the project may be a subset of those types. Do not overcommit to the business what content will be covered in the first phase. When you demo the functionality, only show that phase’s content being imported and published. Try to reduce scope creep by repeating the project’s phase expectations and reassuring the business that all content will eventually be brought into the system. This is a good opportunity to talk about a roadmap of the phases required to fully actualize the business’s total content management with Documentum.

Model review and rework
Roll up attributes as you work through each object type, finding commonalities across object types. Think about all of the UI, TBO, and ACL ramifications of your design. If the object model is huge, try to consolidate; look into using aspects as an alternative to many object types or many attributes.

Implementation design
Depending on the scope, you may be releasing the object types in phases. If this is the case, make sure the phase’s object types cover all of the document types, business processes and security requirements. You may want to prototype some of the object types and their relationships to importing and folder linking. For one-offs in a dev environment, Documentum Application Builder is the fast way to do this. For more methodical approaches, Composer is more portable and best practice going forward.

Metadata to Attributes

Match with OOTB attributes first
Don’t reinvent the wheel: search for a suitable existing attribute first, then create a custom one. If there’s a requirement for a comments field, use the log_entry attribute. If there’s a status field, consider the r_version_label attribute.
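As a hedged sketch, reusing these attributes might look like the following DQL; the object name, folder path, and comment text are hypothetical:

```sql
-- Reuse log_entry as a comments field (object name is hypothetical)
UPDATE dm_document OBJECTS
SET log_entry = 'Reviewed by legal, ready for publication'
WHERE object_name = 'quarterly_report.doc'

-- r_version_label already carries status-like values (CURRENT, 1.0, ...)
SELECT object_name, r_version_label
FROM dm_document
WHERE FOLDER('/Finance/Reports', DESCEND)
```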

Source of values
  • Value assistance can be used for easily maintained attribute values, which would be maintained using Doc App Builder or Composer. Value assistance conditions use Docbasic.
  • You could have a custom object type hold attribute values and query it, with the values maintained by the business via the UI.
  • You could query registered tables or views, which could integrate with linked database tables.
  • You could read and parse a properties file maintained on the server.
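The registered table option above can be sketched in DQL; the table and column names here are hypothetical:

```sql
-- Register a hypothetical lookup table maintained in the linked database
REGISTER TABLE dm_dbo.dept_codes (dept_code CHAR(10), dept_label CHAR(64))

-- Query it to supply attribute values to the UI or to value assistance
SELECT dept_code, dept_label FROM dm_dbo.dept_codes ORDER BY dept_label
```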


UI constraints: repeating attribute, large text boxes
Think about how the users will react to the WDK repeating attribute dialog box. For large strings, you’ll have to put a text area in the forms.
Taxonomy to Folder Structure

Search ramifications
Giving context to search results is an important way to transfer knowledge. Repeating key attribute/value pairs (for searching) with folder hierarchy labels (for browsing) usually makes sense. If it doesn’t, maybe you are trying to put too much metadata on the object.

Simplify wherever possible
If you are going deeper than four levels, you had better have a good reason. At a certain depth there’s a “project,” “deal,” or “product” level that repeats over time. This level should be closer to the top level, have some automated way to build new folders as new “projects” are added, and have an automated or easy way to archive, thus reducing the clutter of old content over time.

Business Process vs. share drive organization
Folder structure may be organized in many ways. A business process oriented structure may make more sense than strictly following the share drive approach of silos of data structured by business group. Avoid organizing content by date only. If there’s an end of quarter date use it, but don’t rely on it solely to find content.

Mix of common, evolved classification with more structured rules
Chances are good that you are moving from one established way of structuring content to another. Doing this right so the users know what they are doing in the new system without hours of training can be difficult, especially because the old way is usually not the best way. If the company is large, your headache is merging structures; if the company is small, your headache is reorganizing silos of information. Sometimes both.

Modify according to folder/object type map
As the rubber hits the road and you’re building folders and automating the linking of content to them, expect changes to the design, additions to the attributes needed, and delays to build and customize the right way.


Folder Structure mapping to Object Type plus Attributes

  • Match Object Type and Attribute Key/Value pair Combinations to Folder paths
  • Prototype the search and browse functionality
  • Do subfolders follow the document templates?


Business Process to Workflow
This is where you’ll read the functional requirements for getting content approved and have to figure out the specific activities involved in a workflow. Sometimes common business processes may not translate into a formal workflow. In this case, describe the steps of using the application and how the work can get done without a formal workflow. You’ll also have to determine how to create a decision point that splits the workflow if there’s a yes/no decision to be made. A common example of an auto-activity is the requirement to send a notification email with a link to the workflow’s package.

Importing content
Check all assumptions involved with importing content into the system. It might be assumed that all content related to the first phase should somehow get into the system. The problem with this is that the contributors of the related content may not be ready to participate in the Documentum system. They may not be in the first phase.

Sunday, May 18, 2008

The Art of the Custom Documentum Object Model

Before designing a Documentum Object Model, you’ll need to take a litmus test of the culture of trust between the sponsor of the project and the IT organization. If the company is large there will be multiple levels of politics. You’ll need to judge from how the requirements gathering sessions went to figure out your approach to the object model design. Part of gathering requirements is educating your client on the types of objects that make up the content management system without getting too technical and wrapped up in explanations that are too long winded and lost on the client. During this education, ask questions like:

  • How do the different business units communicate with each other?
  • Do they share information, is there emphasis on security?
  • Are there databases that they use to look up information?
  • How effective are Marketing and Sales at driving the accumulation of knowledge into content published to consumers?


The answers to these types of questions help determine the meaning of the object model’s hierarchy levels. As the architect of the content management system, you are the only one qualified to make non-biased design decisions, and hopefully you have no agenda in your design. The most common object model hierarchy has an enterprise object as a child of dm_document, dm_folder, etc., and then child objects underneath it. The ramifications of your design magnify at the second level of the hierarchy. Here are some scenarios of what happens when the architect gets influenced in the wrong ways:

Forced to design without all the requirements
Have you been given enough time to really design a model that reflects the whole organization? If the IT Manager on the project says, “Don’t worry about the whole organization, we have three business units in front of us now, this is all we need to worry about for now,” you know there will be issues with the design if you create a model without knowing the bigger picture. Scalability, performance and reporting all suffer when a design is not drawn from the full foundational background.

Influenced by the wrong folks in IT
Most notorious for screwing up an object model are the turf-warring database architects who don’t understand object-oriented design. They demand to know what the relationships are between all of these tables, where’s the schema, etc. They will flip out over table unions and joins if you tell them too much. They couldn’t care less how much the object model is integrated with the UIs and security. So when you say that enterprise-level attributes are reserved for only the most far-reaching attributes across the whole company, like retention period, stick to your guns when they push back and shake their heads. If you need more than three levels of custom object types, make your case as sound and simple as possible; try to include the monetary impact of the customizations that will be needed if it isn’t followed.

Confusing Department Security with Content Functionality
Most companies are set up by departments. Each department shares some information and restricts access to the rest. This doesn’t mean that the object model has to follow the org chart of the company. It may make more sense in the long run to figure out the function of each content type in the enterprise and really study the use cases of the content that is most critical to the company’s success. For example, for a government organization which has vital records (like birth certificates) to scan, index, and store, it makes sense to design an object model around function, in this case vital records, instead of the name of the agency that keeps track of the vital records. Agencies and departments will reorganize over time, functions such as birth records will not.

One repository vs. many
In many cases, multiple repositories designed around one global registry makes sense. There’s more flexibility built into this design throughout the technology stack, as well as with the changes in the business units over time. This does not mean, however, that each repository should have autonomy in its object model design. In fact, there should still be an enterprise-level custom object for each object type being customized, and the object model should be the same in each repository. You’ll have to be more diligent with migrating docapp archives between repositories, especially with the install options.

Decoupling Internal Business Process and External Publishing
If the end goal of the content is to publish it to a portal, there will be conflicts between the internal structure of the content (how the business works with each other) and the external structure (how consumers view and search the content). Do not underestimate how long it will take to work out the navigational systems for each side, the security and identity management, the functional driving attributes, etc. In the best case there will be enough decoupling of the objects and their attributes that the design can provide decoupled and scalable solutions to the conflicts between content management and content publishing.

Some General Rules of Object Model Design

  • Determine if content types are functional or departmental in nature
  • The security model of the repository whether it’s user, object, or folder based may have an overriding influence
  • Build in flexibility to enable the object model to expand in all directions
  • Move attributes that span all of the object types up one level if possible

Saturday, February 9, 2008

Polluting the ECM Ecosphere

You know those email spams that fill up your inbox? Well what about the trail of junk that Content Management Systems leave behind as they forge ahead solving complex business problems?

As I’ve worked on large, small, and medium-sized CMSs, I’m always amazed at how polluted they are with logs, audit files, orphaned work items, queue items, reports, ACLs, versions, etc. The out-of-box cleanup jobs focus on getting rid of unwanted versions, orphaned content, logs, and queue items, to an extent.

The problem is, with some business reporting requirements, they are too good at deleting files that may be of use for historical analysis. Business users get nervous when you say, “We have to clean things up to maintain performance.” They say, “Can we wait a while until we really need to do this? What are the risks? What if you delete something that we need at the end of the quarter, or year, or in ten years?”

M. Scott Roth’s “Seven Jobs Every Documentum Developer Should Know and Use” article details the use of seven Documentum jobs: DMClean, DMFilescan, LogPurge, ConsistencyChecker, UpdateStats, QueueMgt, and StateOfDocbase. There’s a job to trim versions, but for whatever reason Scott didn’t include it in his job list. These jobs are all essential to keeping your repository clean and performing the way you expect, but what do you do about ACLs, workflow history, and versions if the deletion is not specific enough?
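As a starting point, a quick DQL query can show whether these jobs are active and when they last ran; the job object names below are the usual stock names but can vary slightly by Documentum version:

```sql
-- Check status and last runs of the out-of-box cleanup jobs
SELECT object_name, is_inactive, a_last_invocation, a_last_completion
FROM dm_job
WHERE object_name IN ('dm_DMClean', 'dm_DMfilescan', 'dm_LogPurge',
                      'dm_ConsistencyChecker', 'dm_UpdateStats',
                      'dm_QueueMgt', 'dm_StateOfDocbase')
```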

Content pollution is rampant in all industries and is a direct result of rushed design and overambitious technical solutions to relatively simple business problems. Take a regulated content management system, for example. This system most likely creates a new version of content for every change to its file or its metadata. There could also be an audit trail which records every version’s change, backups of the file system and the database for nightly and weekly data security, disaster recovery with off-site replication, multiple renditions, and multiple language versions.

The upshot is that the proliferation of versions, logs, and backups is great for storage “archive” companies, but can lead to confusion and a false sense of security. Who’s making the design decisions? Most likely a business user who doesn’t want change, thus forcing an overworked IT Manager and ECM Architect to work out a solution that puts garbage control on the back burner. “We’ll deal with logs and versions later, right now we have to roll out the project on time and within budget.”

So how do we design with conservation in mind? For one, we think a year or two into the future and try to extrapolate the effects of thousands or millions of scraps of content floating around in the CMS, slowing down queries and filling up the more expensive hard disk space. Here are some more ideas:

Design to recycle:
For each log and object type, ask how it will be created, versioned, and disposed of. What is the purpose of this content? How long will it be useful?

Efficiency:
Think of conservative approaches to logging events and to versioning content (regardless of OOTB functionality).

Upgrades, New Development, and Performance Testing:
Logs, database temp space, and temporary migrated content files can pile up everywhere during special testing and migrations. These files are often “hidden” and sometimes move along to production systems only to clog things up later.

Site Cache temp files and orphaned site files:
Site Caching Services is notorious for leaving stray temp files, logs, and orphan folders all over the place, especially during failed publishing attempts.

Docapp messes:
Docapp installs, when not performed carefully, can leave references to old lifecycles, workflows, object types, and attributes. These orphaned objects can not only clog the system, they can corrupt production environments with hardcoded references to filestores and nonexistent owner names.

Repository and LDAP synch logs:
Every time an LDAP synch job runs, logs get stored in the repository and on the Content Server file system. Every time a repository starts up, a new log starts for it. These logs fill up the server file system, which is usually not a large disk.

DFC traces:
During development and testing, trace logs are essential for tracking down bugs and slow performance. These files are usually forgotten and grow into huge, space-choking surprises when you least expect it.

Environments such as Sandbox, Dev, Test, Performance, Staging, Prod, DR, Off shore, Business Continuance:
All of these environments double, triple, or otherwise multiply the amount of disk space needed for solutions. Think about ways to migrate only the subsets of needed content, perhaps without versions. Reduce logging in environments that are not used very often or that are dormant for a period of time.

Integrations with other applications:
Many integrations of systems require multiple renditions of content for presentation. For example, email messages from Outlook get saved as .msg files in Documentum. Even when EMC’s email Xtender is installed, an integration with Outlook requires copies of the original email to be imported into Documentum’s repository.

Friday, January 4, 2008

Queue Item Maintenance

Issue:
Task events are recorded in tables, namely the dmi_queue_item table. These records build up proportionally to the number of tasks executed. As time goes on and performance potentially slows down, these queue items will need to be deleted.

Requirements:
Periodically delete old records from the dmi_queue_item table, but keep track of a document’s workflow history for a certain amount of time beyond the dmi_queue_item cleaning.

Solution:
The first thing to determine is whether the out-of-box dm_QueueMgt job can do what’s required. This job runs a method that can delete queue items older than a cutoff date. This is useful because the table keeps records of many other repository events which need to be deleted on a scheduled basis; however, we also want to keep the workflow history of a document. The solution was to create a custom table which holds the queue items required to maintain the workflow history of documents, and to create a new job and method to populate it as part of queue management.

Solution Details:
First, create a custom table using DQL (Note: this table has the same columns as the dmi_queue_item table):

UNREGISTER TABLE wf_history_s
EXECUTE exec_sql WITH query='DROP TABLE wf_history_s'
EXECUTE exec_sql WITH query='CREATE TABLE wf_history_s
(
r_object_id VARCHAR2(32),
object_type VARCHAR2(32),
id_1 VARCHAR2(32),
string_5 VARCHAR2(200),
string_4 VARCHAR2(200),
string_3 VARCHAR2(200),
string_2 VARCHAR2(200),
string_1 VARCHAR2(200),
workflow_id VARCHAR2(32),
policy_id VARCHAR2(32),
registry_id VARCHAR2(32),
audit_signature VARCHAR2(255),
audited_obj_vstamp INTEGER,
user_name VARCHAR2(32),
time_stamp_utc DATE,
audit_version INTEGER,
chronicle_id VARCHAR2(32),
controlling_app VARCHAR2(32),
object_name VARCHAR2(255),
audited_obj_id VARCHAR2(32),
version_label VARCHAR2(32),
acl_domain VARCHAR2(32),
attribute_list_id VARCHAR2(32),
host_name VARCHAR2(128),
user_id VARCHAR2(32),
i_audited_obj_class INTEGER,
event_source VARCHAR2(64),
event_name VARCHAR2(64),
r_gen_source INTEGER,
owner_name VARCHAR2(32),
time_stamp DATE,
event_description VARCHAR2(64),
session_id VARCHAR2(32),
current_state VARCHAR2(64),
application_code VARCHAR2(64),
acl_name VARCHAR2(32),
attribute_list VARCHAR2(2000),
i_is_archived VARCHAR2(32),
id_5 VARCHAR2(32),
id_4 VARCHAR2(32),
id_3 VARCHAR2(32),
id_2 VARCHAR2(32)
)'

REGISTER TABLE dm_dbo.wf_history_s
(
r_object_id CHAR(32),
object_type CHAR(32),
id_1 CHAR(32),
string_5 CHAR(200),
string_4 CHAR(200),
string_3 CHAR(200),
string_2 CHAR(200),
string_1 CHAR(200),
workflow_id CHAR(32),
policy_id CHAR(32),
registry_id CHAR(32),
audit_signature CHAR(255),
audited_obj_vstamp INT,
user_name CHAR(32),
time_stamp_utc TIME,
audit_version INT,
chronicle_id CHAR(32),
controlling_app CHAR(32),
object_name CHAR(255),
audited_obj_id CHAR(32),
version_label CHAR(32),
acl_domain CHAR(32),
attribute_list_id CHAR(32),
host_name CHAR(128),
user_id CHAR(32),
i_audited_obj_class INT,
event_source CHAR(64),
event_name CHAR(64),
r_gen_source INT,
owner_name CHAR(32),
time_stamp TIME,
event_description CHAR(64),
session_id CHAR(32),
current_state CHAR(64),
application_code CHAR(64),
acl_name CHAR(32),
attribute_list CHAR(2000),
i_is_archived CHAR(32),
id_5 CHAR(32),
id_4 CHAR(32),
id_3 CHAR(32),
id_2 CHAR(32)
)

update dm_registered object
set owner_table_permit = 15,
set group_table_permit = 15,
set world_table_permit = 15
where table_name = 'wf_history_s'

Second, create a custom Documentum method to be executed by the custom queue management job. This class should have the following methods and logic:

a. Populate Workflow History Table according to criteria. Here’s an example dql:

"insert into dm_dbo.wf_history_s " +
"(r_object_id, event_name, time_stamp, user_name, audited_obj_id, string_4, workflow_id, string_3) " +
"SELECT '0000000000000000' as r_object_id, task_name as event_name, date_sent as time_stamp, sent_by as user_name, r_object_id as audited_obj_id, name as string_4 , router_id, task_state as string_3 " +
"FROM dmi_queue_item " +
"WHERE r_object_id not in (select audited_obj_id from dm_dbo.wf_history_s) " +
"AND router_id != '0000000000000000' " +
"AND date_sent < DATEADD(Day, -"+sCutOffDate+", date(today)) " +
"AND delete_flag = 1";

b. If the Workflow History Table gets populated successfully, delete the dmi_queue_item rows according to criteria. Here’s an example dql:

"DELETE dmi_queue_item objects " +
"WHERE router_id != '0000000000000000' " +
"AND date_sent < DATEADD(Day, -"+m_cutoff+", date(today)) " +
"AND delete_flag = 1";

c. Write the job report to the repository.

Third, create the custom queue management job.
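As a hedged sketch (not the exact job definition used here), the method and job objects could be created with DQL like the following; the object names and Java class name are hypothetical, and some attributes vary by Content Server version:

```sql
-- Hypothetical method object pointing at the custom Java class
CREATE dm_method OBJECT
SET object_name = 'custom_wf_history_method',
SET method_verb = 'com.example.methods.WfHistoryMethod',
SET method_type = 'java',
SET use_method_server = TRUE

-- Hypothetical job that runs the method on a daily schedule
CREATE dm_job OBJECT
SET object_name = 'custom_wf_history_job',
SET method_name = 'custom_wf_history_method',
SET run_mode = 1,
SET run_interval = 1,
SET is_inactive = FALSE
```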

Thursday, December 13, 2007

Documentum Workflow: Reporting History by Document

Issue:
Tracking workflow status by document from Webtop’s Properties/History tab.

Requirements:
Allow a user to select a document in Webtop and click on Properties/History to view the document’s workflow history.

Solution History:
Webtop: Out-of-box workflow functionality which requires running the dm_WFReporting job and auditing events.

Report: Properties/History
Location: Select Workflow task and go to Properties/History
You’ll get a list of the work activities, their date/time stamps, and who performed them.

Report: Workflow Reporting
Location: Tools/Workflow/Workflow Reporting
Attributes: Workflow Name, Status, Active, Task Name, Performer, Supervisor
Edit Report:
User – select a user
Document – select a document by drilling down folders
Template – select a workflow template
Show overdue, all, running, or completed workflows

Report: Historical Report (Process or User)
Location: Tools/Workflow/Historical Report/Process or User
Form options:

  • Display statistics for business process running
  • From: To:
  • Include only process where these conditions are met
  • Process Template name contains: type in text
  • Or User contains: type in text
  • Workflow Supervisor is: select user
  • Duration: select operator and type in days, hours, minutes
  • Cost: select operator and type in number
  • Location is: select folder

WorkQueue Monitor
Location: Select Workflow task and go to Properties/History
Functionality: You’ll get a list of the work activities, their date/time stamps, and who performed them.

Solution:
Create a custom Package_History table
Purpose: This is necessary because the link between the workflow instance and the package ID (the content ID) is lost after the workflow is complete. This table provides the track record of a workflow’s packages during the workflow activity and after the workflow is complete.

Create a custom Workflow_History table
Purpose: To keep track of workflow queue items. We decided to store all workflow related queue events in a custom table to assure that the historical information for the workflow activities would not be deleted by the queue management job.

Query the workflow history based on the following steps

  1. Look up the workflow instance ID from the custom package history table based on the Object ID.
  2. Select attributes from the custom workflow history table
  3. Union these results with a selection from the dmi_queue_item table

Present the User with the following information about the document’s workflow history
Date: Timestamp of action
Work Queue: Name of the work queue or activity
Performer: Name of the user who performed the task
Action: The event status of the activity

Changes to Webtop
Component changes
· Copy history_component.xml to the custom folder, extend the original configuration file, and point the behavior to the new class
· Set “string_3” to visible (true) so it will show up on the JSP page
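A hedged sketch of the extended component definition might look like the following; the custom class name and start page path are hypothetical:

```xml
<!-- custom/config/history_component.xml: extends the out-of-box history
     component; the class name here is hypothetical -->
<config version="1.0">
  <scope>
    <component id="history"
               extends="history:webcomponent/config/library/history_component.xml">
      <pages>
        <start>/custom/webcomponent/library/history/history.jsp</start>
      </pages>
      <class>com.example.webtop.history.CustomHistoryComponent</class>
    </component>
  </scope>
</config>
```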


Create a custom component behavior class
The main customization is with the query string method:

protected String getQuery(String strVisibleAttrs, ArgumentList args)
{
if (m_strSelectedVersionObjectId == null)
m_strSelectedVersionObjectId = m_strObjectId;
String strWhere = m_strQueryConditionFormat;
strWhere = StringUtil.replace(strWhere, "{r_object_id}", "'" +
m_strSelectedVersionObjectId + "'");
StringBuffer buf = new StringBuffer(128);
// History rows archived to the custom table
buf.append("SELECT ");
buf.append(strVisibleAttrs);
buf.append(", '1' as dummy");
buf.append(" FROM dm_dbo.wf_history_s WHERE ");
buf.append(strWhere);
buf.append(" OR workflow_id in (select r_workflow_id ");
buf.append(" from dm_dbo.package_history ");
buf.append(" where r_component_id = '" + m_strObjectId + "' ) ");
// Union with the live rows still in dmi_queue_item
buf.append(" UNION ");
buf.append(" SELECT sent_by as user_name, task_name as event_name, ");
buf.append(" task_state as string_3, date_sent as time_stamp, '1' as dummy ");
buf.append(" FROM dmi_queue_item");
buf.append(" WHERE router_id in ");
buf.append(" (select r_workflow_id ");
buf.append(" from dm_dbo.package_history ");
buf.append(" where r_component_id = '" + m_strObjectId + "' ) ");
buf.append(" ORDER BY 4 ASC");

System.out.println("Query: " + buf.toString());
return buf.toString();
}

Component Presentation Changes
The history.jsp file is copied to the custom folder /custom/webcomponent/library/history.

Friday, December 7, 2007

Documentum Workflow: Queue Management

Overview
In workflows, each work item or task has to be managed, maintained, and tracked. As workflows are executed and performed, there are certain aspects of managing these tasks that are not straightforward and are definitely not offered out-of-the-box with Webtop.

Management:

Issue:
Work queue tasks can be acquired and then left unattended unless a scheduled job runs to determine whether the user who acquired the task is still using the system or has left for the day.

Solution History:

First Attempt:
The first fix was to customize the logout functionality of Webtop to set the work item (task) to the “putback” auto-activity. The “putback” auto-activity then gets the work item’s sequence number and subtracts 1 from it and sets the work item back to the previous activity.

First Attempt Issues:
This does not take into account users who X out of their browser without logging out. It also doesn’t scale.

Follow Up Attempt:
We decided to create a scheduled job which would execute a Java method to “unacquire” the work items, like the “unassign” functionality on the Work Queue Monitor page in Webtop. This required some reverse engineering of the existing unassign class to figure out which services and APIs were used and which objects and jars were required. It also required looking at DA’s User Session functionality and bringing that in as well. Here are the components of this solution:

Documentum Objects
dm_job
dm_method

Java classes

Class: public class TimeoutPutBackMethod implements IDmMethod

Methods:

execute(Map params, OutputStream output)

- This is the main dm_method execution method, which accepts parameters sent from the job and an output stream that ends up as a report written to the “Temp/Jobs// log files.

getAllActiveSessions(session, output)

- This method uses the DfSessionCommand object to get all of the active sessions on the repository.
- It also queries the a_held_by (username who acquired the task) values of the work items to figure out which work items should be unassigned.

unAssignWorkflowTasks(IDfSession session, String UserName, String sDCTMDocbase, OutputStream output)

- Calls the getWQName method to lookup the work queue name based on the activity name.
- Calls UnassignQueuedTask.unassignTasks

getWQName(IDfSession session, String ActivityName)
- Queries the dm_activity table looking for the performer (work queue name) based on the activity name.
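That lookup might use DQL along these lines; the activity name is a placeholder:

```sql
-- Find the work queue (performer) configured for an activity
SELECT performer_name
FROM dm_activity
WHERE object_name = 'Review Documents'
```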

Class: public class UnassignQueuedTask

Method:
unassignTasks(IDfSession m_session, String m_taskId, String m_queueName, String m_docbase, OutputStream output)

- This method constructs a work queue manager service and work queue object to unassign the work item.

Non-standard APIs used
import java.io.OutputStream;
Used for outputting strings to a job report file.
import com.documentum.services.workqueue.IWorkQueue;
Used to construct a work queue object and unassign its work item.
import com.documentum.services.workqueue.IWorkQueueMgmt;
Used to construct a work queue manager service.
import com.documentum.mthdservlet.IDmMethod;
Implemented so the class can be executed as a dm_method object.
import com.documentum.admin.commands.DfSessionCommand;
Used to get all of the active sessions in the repository.