One of my previous posts
demonstrated several ways to test your BPM processes with unit testing.
The most flexible and useful by far is mixing PAPI with Java JUnit
test classes to automate specific test cases. This way you can exercise
all the paths of your process and be sure everything works as expected.
That post showed you how to do that.

This article explores a solution to a problem with that methodology:
JUnit tests of BPM processes that contain interactive forms can't be
completely automated. You would have to pause the test, open up
Workspace, find the instance, and execute the form by hand. This
procedure is prone to human-introduced variations which can't be
reliably reproduced.
A BPM process deployed to either the Studio or Enterprise engine sends
and receives data in many ways. The interaction can be done through
database calls, webservice calls, JMS topics or queues, files,
interactive forms, the list goes on.
When testing with JUnit/PAPI you can stub most of these out: files and
databases with local copies, JMS topics and queues filled with expected
data from JUnit, and web services easily mocked out with SOAPUI. Getting
reproducible, reliable test data into your test cases is straightforward.
But as long as there are interactive forms in your process, you can't
automate further. And, because you have to go through Workspace, no
back-end, JMeter, or HTTP trickery can reliably interact with the forms
to get and send data to the process.
The Solution: WAPI
To overcome these shortcomings, we can use WAPI to send the data we want
directly to Workspace. Thus, our test cases can be completely automated
with predictable data.
WAPI: ALBPM / OBPM 10g's Web API
What is WAPI? It is ALBPM / OBPM 10g's Web API, meaning you can
interact with Workspace via HTTP calls. You can do things like executing
Workspace tasks, getting and filling out forms, getting process
diagrams and audit trails, and more. Unfortunately, there isn't much in
the way of documentation on it. All we really have to go on are a few
provided examples, which show how to do a few things with WAPI. But that
was enough to get me started. And now you have a fairly complete example
of how to apply this technology to automate the JUnit test cases for
your BPM processes.
But WAPI is a web interface, meaning it is meant to be called from a
browser. Like Workspace, you have to provide valid login credentials in
order to do anything interesting. In fact, WAPI is a part of the
Workspace Java Application. So you may be wondering how Java and JUnit
are able to use WAPI to provide form inputs for interactive activities.
I have done a bit of socket programming in my day, so I'm familiar
with HTTP and how browsers interact with servlets. I wrote a few static
methods that you can use to connect to Workspace from Java using the
normal login credentials, manage the session, and post form values over
HTTP. To Workspace, it looks just like a browser.
All of this is in static methods in the Tools.java class. To use
this class, you just provide the HTTP POST values that are in the form
you are presenting in the interactive activities. In the example
projects, I got this working for declarative forms, Object
Presentations, and custom JSPs. The static methods should do most of
the work for you. The tricky part will be properly formatting the form
values.
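As a rough sketch of what those helpers do (the class name, method names, and URL handling here are illustrative assumptions, not the actual Tools.java API):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Map;

// Sketch of the kind of helpers Tools.java provides: encode form
// fields into a POST body and replay it to Workspace like a browser.
class WorkspaceFormPoster {

    // URL-encode a set of form fields into an HTTP POST body.
    // URLEncoder encodes spaces as '+', so we swap in '%20' to match
    // the syntax used in the .properties files.
    static String encodeForm(Map<String, String> fields) {
        StringBuilder body = new StringBuilder();
        try {
            for (Map.Entry<String, String> e : fields.entrySet()) {
                if (body.length() > 0) {
                    body.append('&');
                }
                body.append(URLEncoder.encode(e.getKey(), "UTF-8").replace("+", "%20"))
                    .append('=')
                    .append(URLEncoder.encode(e.getValue(), "UTF-8").replace("+", "%20"));
            }
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
        return body.toString();
    }

    // POST the encoded body to Workspace, re-sending the session cookie
    // obtained at login so Workspace treats us like a logged-in browser.
    static int postForm(String url, String sessionCookie, String body) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Cookie", sessionCookie);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        return conn.getResponseCode();
    }
}
```

The testable part is the encoding; the HTTP part simply replays what a browser would send.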
Setting the Form Values
The form values are put in the config/*.properties file along with all
of the other PAPI/WAPI settings. This allows you to store posts for
several different forms, and several posts for the same form, to satisfy
any test criteria you have.
I provided two sample .properties files in the config folder to show
you how to connect to the Studio engine as well as a WebLogic Server
(WLS) engine.
The form's values use the following syntax:
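For instance, a post for a form with two hypothetical fields might look like:

```
FieldName1=FieldValue1&FieldName2=Some%20Text%20Value
```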
Notice the "%20" for the space. This is typical URL encoding, using
hex values for special characters which would confuse parsers. The field
names are the "name" attributes of the HTML INPUT tags. For example,
the INPUT tag: "<INPUT type=text name=FieldName1 value=FieldValue1
/>" the name is "FieldName1". For declarative forms and object
presentations, this can be hard to decipher. I usually use this shortcut
to generate the post-value strings I put in the .properties file. But
to do this you need Firefox and the Firebug plugin.
1. Open Workspace in Firefox.
2. Open the form you want to automate.
3. Fill out the form.
4. Just before you submit, open the Firebug panel and select the "Net" tab.
5. If the "Net" tab is not enabled, enable it.
6. Click the "Clear" button to get rid of any previous network traces.
7. Click the "Persist" button to make sure the traces you want don't get cleared.
8. Make sure the "All" button is depressed.
9. Press the button or link on the form that saves or closes the form and returns control to the screenflow or process.
10. In the Firebug Net panel, you should see some network traces appear. Look for the first "POST" line, and click on the plus button next to it.
11. Open the "Post" sub-tab, and scroll down until you see the "Source" section.
12. You will notice that the text in this section matches the syntax I described above, URL encoding and everything. So, just select the entire text of the "Source" section and paste it where you like in the .properties file.
This procedure makes it much easier to generate the test data for
form posts. And it is less error-prone because it uses the actual data
your form used to submit to Workspace.
Getting the Source
The working examples are available at the following links. The archives
contain both the BPM project export and the Java source code.
Using the Examples
After you uncompress the source code, you will need to provide the
necessary ALBPM or OBPM 10g jar files to run the examples. These are
the typical jar files you would use for any PAPI program. They are
available in your BPM Enterprise installation under the
<BPM_HOME>/client/papi/lib folder. There is a text file in the lib
folder of the example which explicitly lists the jar files I used to
get the project working.
Second, if you are connecting to a BPM Enterprise engine, you will
need to put that engine's directory.xml in the config folder. This file
contains database connectivity information so the Java program will know
where all of the BPM goodies are (the BPM Directory database). I
usually get it from the <BPM_HOME>/webapps/workspace/WEB-INF
folder of the J2EE installation.
Next, edit the .properties file you are going to use to suit your
environment. Most of these values will be easy to decipher. But I will
go over a few of the ones which might give you trouble. I provide two
.properties files: one for the typical connection to a J2EE Enterprise
installation of BPM, and another for connecting to the BPM Studio engine.
The Studio engine connection does not need to know where the
directory.xml file is located (fuego.directory.file). But it does need
to know where the project is located (fuego.project.path). Also, Studio
does not require passwords to connect. But the example code expects a
value in "bpm_password". So it doesn't matter what you put in this
value. It just has to be present.
The J2EE Enterprise connection does need a password. And because I
just hate plain text passwords in text files, the example code expects
the password in the .properties file to be encrypted. I provide a simple
Java class to encrypt a password
(com.floydwalker.crypto.CryptoLauncher). If you plan on connecting to
BPM engines that handle sensitive data, you will of course change this
to suit whatever security policy you have in place.
There are two sets of credentials. "bpm_user"/"bpm_password" are the
PAPI engine connection credentials. "WorkspaceUser"/"WorkspacePassword"
set the participant login that is used to execute the interactive
activities.
There are a few "Workspace..." settings that allow you to specify
where Workspace is located. You can probably just change the hostname
and port on these guys to get them working.
At the very bottom of the .properties file are the form post values
for the interactive forms provided in the BPM project export.
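Putting these settings together, an Enterprise-side .properties file might look roughly like this (the "Workspace..." host settings and the form-post key names are illustrative; the sample files in the download show the exact keys):

```
# PAPI engine connection credentials
bpm_user=test_admin
bpm_password=<encrypted value produced by CryptoLauncher>
fuego.directory.file=config/directory.xml

# Participant used to execute the interactive activities
WorkspaceUser=jcooper
WorkspacePassword=welcome1

# Where Workspace is running (illustrative key names)
WorkspaceHost=localhost
WorkspacePort=8585

# Form post values for the interactive forms (illustrative key name)
ApprovalFormPost=FieldName1=FieldValue1&Comments=Looks%20good
```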
I have been using these techniques frequently to apply
test-driven-development techniques to BPM Process design. It has made it
much easier to find errors and bottlenecks in edge cases. I also like
the convenience of proving that recent changes are viable. Using mocked
servers and stubbing out other interfaces to external systems, it is
easier to isolate whether bugs come from BPM or from the external
systems.
It is my wish that this knowledge helps you develop better processes, faster. Enjoy.
I have long been a fan of unit testing. Thus, I was full of joy when I
found out that PUnit and CUnit objects were made available back in ALBPM
version 6.
My good mood was quickly dashed when I found out that there was not much
in the way of documentation and examples. There is a paragraph or two in
the manual, and the example included with the installation of Studio is
minimal.
But, over time I eventually learned the ins and outs of leveraging these
objects to my benefit. While they aren't as comprehensive or as
reliable as I would like, they are quite useful.
I will share what I have learned about these objects here, so others
will not have to go through the pain I went through figuring them out.
PUnit and CUnit -- What they are and What they do.
First of all, let's introduce these objects to those who don't yet know them.
PUnit objects are a special kind of Business Object you can create in
the Component Catalog in ALBPM 6, 6.5, or OBPM 10g Studio.
They allow you to create automated tests of your processes in the style
of JUnit tests. They will even allow you to specify how you want
interactive activities to respond through the use of PUnit tasks.
CUnit objects are another special kind of business object that allow you
to create automated tests of your business objects in the Catalog.
Creating a PUnit Test
Creating a PUnit object is easy. It's
just a matter of choosing the right menu option off of the "New" menu
when you right-click on a "Module" of the Catalog. See the image below.

Creating a PUnit Object
Now creating a test method in the PUnit object is a bit more tricky.
Right-click the PUnit object in the Catalog, select "New" then "Method".
Here is the tricky part. You have to begin the name of the test method
with the word "test". Otherwise, OBPM will not recognize that method as a
test method. So, name the method something like "testProcess" or
"testBetterDocumentation" or something like that.
After you have a test method (you will know because the method icon will
be green instead of blue), you can put special PUnit test object calls
into the method. You can find these easily by using Eclipse's
Ctrl+Spacebar feature to tell you what you can do. The operations and
objects are easy to understand, especially if you know PAPI.
To get you started, I have written a sample test method in this project export you can download below.
Notice the built-in methods "setUp" and "tearDown". These should be
familiar to JUnit veterans. They initiate a PAPI connection/session to
the Studio engine before running a test, and shut it down gracefully
afterwards. You shouldn't need to change the "tearDown" method which is
written for you. But, you may want to change the participant specified
in the "setUp" method. After all, the participant specified here will be
the user context under which the test will be performed, and you
probably won't have a participant "punit" in your Organization settings.
There is also a built-in attribute of type
Fuego.Test.ProcessServiceSession named session included with the PUnit
test object. This is your connection to the Studio engine, and the
object from which most of your interactions with the engine will be
made.
Writing and Running a PUnit Test
The example I have provided shows how you can do the following tasks in a PUnit test.
- Create an instance of a process
- Assign input arguments to a process instance
- Assert that the instance is running
- Get the current activity of a running process
- Get the value of an instance variable (only works for simple data types)
- Assert that a process instance arrives at a certain activity within a certain amount of time
You can do much more with the session object. But these are the
activities I usually do in a Process test. The
Fuego.Test.ProcessServiceSession has some similar methods to the PAPI
ProcessServiceSession. But the PUnit version is quite different: you
don't have as much flexibility as you do with PAPI.
But, you are able to interact with interactive activities. Here is
how you do it. Right-click on the interactive activity you are including
in your process test, and select the "PUnit task" menu option.

Testing Interactive Activities
Since PUnit tests are automated, and there is no actual user
interaction, you specify how the interactive activity changes the
process instance (when you are running the PUnit test) in the PUnit
task. So if the interactive's output parameter sets an instance variable,
you would set the same instance variable in the PUnit task method.
Next, in the PUnit test method, when you are sure the process instance
is waiting at the interactive activity, use the session.activityExecute
method to execute the PUnit task specified.
To run the test method, first start the ALBPM or OBPM 10g engine.
Then right-click the method in the Catalog, and choose "Run Test". The
"Test Results" pane will pop up and you can watch the test execute.

The Test Results Pane
If you are using objects in your Catalog which have been introspected
from jar files as input or output parameters of processes, you will need
to modify Studio/Eclipse's classpath.
The test runner executes the tests in Eclipse's Java environment. So
even though writing the test code produces no errors (Eclipse knows what
is in the Catalog), and the engine runs fine (in another Tomcat Java
environment which has the jar files deployed with the BPM project), the
test runner will report ClassNotFound errors.
This is pretty tricky. I solved it by modifying the file
%BEA_HOME%\albpm6.0\studio\eclipse\configuration\config.ini, changing
the osgi.frameworkClassPath value to include my custom jar files in the
classpath.
While this worked for me, occasionally I still received ClassNotFound errors. But these went away when I restarted ALBPM Studio.
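As an illustration (the jar path is hypothetical, and the existing entries should be kept as they are):

```
osgi.frameworkClassPath=<existing entries>,file:/C:/libs/my-introspected-classes.jar
```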
Limitations of PUnit Tests
You can get the values of instance variables returned to the test
method. However, this only works reliably for simple or native data
types. Complex datatypes (business objects, and introspected objects)
can be viewed as XML if they are serializable. Otherwise, their values
can't be inspected in the test.
Testing processes which use subprocesses and Split/Join activities can be
difficult. You can't get a subprocess' Id from the parent process. And
targeting a specific thread in a Split/Join group can be difficult.
Those limitations aside, testing Processes with PUnit is a pretty
useful feature. One of the more welcome features is the ability to
change the PUnit test method and run it without having to stop the
engine or redeploy the process.
Writing and Running CUnit Tests
As mentioned earlier, CUnit tests can be used in the traditional fashion
to test code written in Business Objects to ensure it works as
expected. You create and write CUnit tests in the same way as you do
PUnit tests. You also have to start the names of CUnit test methods with
the word "test".
CUnit tests can also be used to verify or prove that External
Systems (Web Services or Databases, etc.) are returning the expected
responses to your inputs. Or, they can be used to do ad-hoc testing of
Business Object code.
The advantage of CUnit tests over PUnit tests is that you don't need to
have an engine running or a process deployed to run them. But if you are
using introspected jar's in the test code, you will need to change
Eclipse's classpath as shown earlier.
Using a PAPI JUnit Java Test Environment
Sometimes you need to test your BPM project in a J2EE container because
the process just does not behave the same way in Tomcat as it does in
WebLogic Server. To do this, simply write JUnit test cases using PAPI.
Doing this has some advantages and disadvantages compared to using the
PUnit test objects:
- If your subprocess activity has the "Generate Events" option
turned on, the PAPI JUnit test case can pick up a process' subprocess id
and continue testing. You can't do this with PUnit tests.
- You can get the values of instance variables of complex data types.
- PAPI JUnit test methods cannot execute interactive activities in a
process. PUnit tasks are not available outside of ALBPM / OBPM Studio.
Once written, PAPI JUnit tests can be run on both J2EE engines and ALBPM / OBPM Studio engines.
You can download the example below to get you started creating a PAPI JUnit test suite.
When you get to the point where you have hundreds of thousands of
active instances in your engines, minimizing the amount of memory each
instance uses is essential. Ensuring you have a good process design at
the beginning will save you problems trying to scale your servers to
support them. This article will give you some ideas on how this can be
done.
Most of us software developers are familiar with the venerable
Model/View/Control (MVC) methodology which has greatly increased memory
efficiency, performance, and maintainability in our projects. I have
developed a similar methodology for developing BPM projects in OBPM /
ALBPM which has proven to be successful.
Three Kinds of Business Objects
The BPM Component Catalog allows you to introspect many kinds of
objects from disparate resources. While this is very convenient, don't
get into the trap of using those objects as they are imported. This will
usually get you using objects which consume much more memory than you
need. It is better to create new objects which just accomplish what BPM
needs to do, and have methods or helper objects to transfer the
appropriate information between concerned objects.
Sample Business Model Object
If you start segregating your objects, you will end up having three
kinds: Business Model Objects, Process Flow Objects, and Process View
Objects. I will explain what I mean below.

Separating the Object Into Useful Portions
Business Model Objects
These are usually the objects which are introspected into the
Component Catalog. They are very detailed and complex. They probably have
relationships with other objects. They represent the knowledge of a
business concept, or at least how those concepts interact with the
business.
Business Model Objects are very useful for interacting with web
services, databases, and other external systems. But they are a poor fit
for process flows because of their memory requirements. And they are
cumbersome to use for user interfaces, because the data is optimized for
retrieval and storage, not for data entry or display to a human.
Process Flow Objects
The Process Flow Objects contain a subset of the information stored
in the Business Model Objects. The idea is to only have information in
these objects which the process flow will need for decisions and
supporting the process flow.
Process View Objects
These objects also contain a subset of the Business Model Objects.
The focus of the Process View Objects is supporting Screenflows or
Presentations. The data and object structure of these objects is
optimized for easy presentation of user interfaces.
Using Business Model Objects in the Process Flow
So if you take the bait, and use your Business Model Objects as
instance variables in your process flows, here is what happens. First,
there is a big splash as the object is filled in by your back-end
systems. Information is shared between activities by putting in or
changing data in the object's attributes. As the process goes through
its life-cycle, the object grows in memory usage. And, the object is
using memory for the full duration of the lifespan of the process.

Using Business Model Objects in the Process Flow
Using the Three Object Types Method in the Process Flow
Business Objects are used in automatic activities to provide data to
the Process Flow and Process View objects. They are then unloaded from
memory. The objects with the smallest footprint (the Process Flow
objects) are in memory the longest.
Using the Three Object Types
Process View objects are loaded with data in order to support
interactive activities. They are only in instance memory long enough to
complete the interactive activity, and send the results to the other
objects. Since these objects are optimized for the screenflows, their
memory usage is reduced.

Object Type Relationships
Well That's Nice, But...
Yeah, sometimes you just need to keep large chunks of data around to
avoid performance problems caused by excess chatter (from continually
loading the Business Model Objects). This is when you would use
"separated" instance variables. These are then stored in a database
table and not in direct instance memory.
How to Segment Business Model Objects
First, Business Model Objects can have methods to load and save to
database/webservice/etc. This keeps the complexity of the persistence
method out of the process flows completely.
Segmenting the Objects
Next, Process Flow Objects can have methods for loading their information from the Business Model Objects.
Then, Process View Objects can have methods for loading and saving their information back to the appropriate objects.
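The segmenting above can be sketched in code. This is an illustrative example only: the Order domain, field names, and threshold are made up, and the real load/save methods would call your back-end systems.

```java
import java.math.BigDecimal;

// Business Model Object: the full, detailed object introspected into
// the Catalog. It would also carry load/save methods so the
// persistence details stay out of the process flows entirely.
class Order {
    String orderId;
    String customerName;
    String shippingAddress;
    BigDecimal total;
    String[] lineItems;   // stands in for a rich line-item structure
    // void load(String orderId) { ... call the back-end system ... }
    // void save() { ... }
}

// Process Flow Object: only what the flow needs for routing decisions,
// so the object that lives longest in instance memory stays small.
class OrderFlow {
    String orderId;
    BigDecimal total;
    boolean needsManagerApproval;

    static OrderFlow from(Order o) {
        OrderFlow f = new OrderFlow();
        f.orderId = o.orderId;
        f.total = o.total;
        // Hypothetical business rule used only for illustration.
        f.needsManagerApproval = o.total.compareTo(new BigDecimal("1000")) > 0;
        return f;
    }
}

// Process View Object: shaped for the screenflow, loaded only for the
// duration of the interactive activity.
class OrderView {
    String orderId;
    String customerName;
    String displayTotal;

    static OrderView from(Order o) {
        OrderView v = new OrderView();
        v.orderId = o.orderId;
        v.customerName = o.customerName;
        v.displayTotal = "$" + o.total.toPlainString();
        return v;
    }
}
```

The design choice is that each smaller object knows how to load itself from the Business Model Object, so the conversions live in one place instead of being scattered through activity code.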
I have had several clients now who want to dump their current Web
Service integrations in favor of using JMS in their BPM orchestrations.
The
reasons are many, but the main ones have to do with message persistence
and scalability. Messages sent with JMS are queued up and persisted
until the consumer is ready. And, JMS is a little more efficient than
Web Services due to the lack of the SOAP envelope. Whatever your reasons
for wanting to use JMS instead of Web Services in OBPM/ALBPM, this
article should help you a bit.
Not as Simple as it Sounds
OBPM makes it seriously easy to implement Web Service integrations. Just
introspect the WSDL into the catalog and then drag a method from the
catalog to a code-block. The response is usually returned immediately
in the typical HTTP request/response cycle. Plus, all of the objects
involved are usually embedded in the WSDL so everybody knows what they
are talking about.
With JMS, the BPM process has to listen to a JMS Queue with a Global
Automatic activity and manually wait for a response using a Notification
Wait activity for an asynchronous response. Also, if you are doing more
than just simple text JMS messages, all the objects being used will
need to be manually introspected into the catalog, and kept up to date
with what the other side of the JMS conversation is using. The following
diagram shows the differences.
The Message Router
You may be asking why you need two processes for this to work. Well, if
you just have one JMS conversation going on, then you will only need one
process. Something like the following diagram will certainly do the job.
But, if you have several conversations going on with numerous
processes, you won’t want to have a single JMS queue built for each
conversation in each process. You will probably want to use just a few
JMS queues and route messages to the processes based upon what is in the
messages (such as by object type). Thus you will end up with something
like the following message router process (MRP).
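The routing decision itself can be kept simple. As a sketch (the process paths and the message "type" property are made up for illustration), the MRP's dispatch amounts to a lookup table:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a message router's dispatch table: pick the target process
// based on a "type" property carried in the JMS message. The process
// paths and type names are hypothetical.
class MessageRouter {
    private final Map<String, String> routes = new LinkedHashMap<>();

    MessageRouter() {
        routes.put("OrderResponse", "/OrderProcess");
        routes.put("CreditCheckResponse", "/CreditCheckProcess");
    }

    // Returns the process to notify, or null for a dead-letter/log path.
    String route(String messageType) {
        return routes.get(messageType);
    }
}
```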
Note: I don’t like using the Process Notification activity to notify
processes. If the process instance to be notified is not found, the
instance of the MRP is aborted. If an error handler is present in the
MRP, this can be avoided, but the engine still logs a Warning error in
the log. Get enough of these, and it can really clutter up the log. I
prefer to use the PBL Notification.Send method in an Automatic
Activity. Then, you have more control over what happens in these
situations.
Processing the Notification
A Correlation Set has to be created and initialized by the process
receiving the notification from the message router process (MRP) before
it receives the notification. This is so that the MRP can find the
appropriate process instance by the unique business Id. The MRP does not
need to know about the Correlation Set, it just needs to set the
appropriate unique business Id argument before sending the notification.
The receiving process also needs to handle unexpected problems with the
messaging conversation. Here are a couple of examples. First, to handle
the case where the JMS sender never sends a response, an activity with a
due transition needs to be present to time-out the conversation. Next,
when the sender responds with a message that it couldn’t perform the
desired action, a conditional path is created.
So our simple example from earlier now looks like this.
Whew! That’s a lot of infrastructure to build up just to replace a
simple Web Service request/response. Yeah, JMS may be more efficient,
but its implementation here is much more verbose.
Now that you have a better idea of how to do it, I have a few
implementation patterns you may want to consider when implementing JMS
in your processes.
JMS Process Pattern 1: Interrupting Interactive Activities
This is a case where a process instance is waiting at an Interactive
Activity, and it needs to move on to the next activity when it receives a
JMS message. Essentially, the process flow is blocked by an Interactive
instead of a Notification Wait activity. This pattern works well when
you want the process instance to appear in the Workspace inbox even
though it is actually waiting for a JMS notification. The user can then
have the opportunity to interact with the process without having to
search for it and use a Grab.
I like this pattern because it increases the visibility of the
process instance in the Workspace. Plus, it gives the user more
information and influence over the instance. The user can have their
view set to sort by instance creation date, and easily see which
instances have been stuck too long waiting on a notification. If we used
Notification Waits to block the process, a custom view would need to be
created to make the instances visible.
A Correlation Set is initiated in the Begin activity, and used in the
Notification Wait activity. The latter has the “Allows Interruptions”
option turned on so that when the notification occurs, it will interrupt
the running instance in a way similar to the error handling flows. In
the automatic activity, the PBL sets the predefined variable “action” to
SKIP. Thus, when the notification flow reaches the Compensation
Activity, the process instance will skip the processing of the current
activity (the interactive) and proceed on to the next activity down the
unconditional path towards the “Finalize” activity. The Grab Activity is
there so participants in a more administrative role can easily move the
instance along the path should the notification never arrive.
JMS Process Pattern 2: Manually Moving a Process Instance
The “action=SKIP” strategy only works if the next activity in the
process flow is where you want the instance after the notification
arrives. What do you do if you need more control over where the process
instance goes? That is the question that the following pattern answers.
The diagram of this pattern is exactly like the previous one, except
for the addition of the "Exception Grab" activity. This is a From All/To
All Grab. I call these "Exception Grabs" because they are usually used
as a method of last resort to fix a process instance that has gone down
the wrong path.
In the notification flow, the Automatic Activity uses a round-about
way to automatically execute the Exception Grab to send the instance to
the activity of choice.
Because you can’t execute a grab while in a notification or error flow, a
simple process is created to connect to the engine, find the process
instance, and execute the grab.
All of the code and the processes are included in the sample project
export that can be downloaded below. This example (in the export) also
shows how to manually initiate and finalize the Correlation Set in PBL.
JMS Process Pattern 3: Handling Multiple Asynchronous JMS Responses
This pattern addresses the case where a single request can cause
multiple JMS responses. And, these responses can arrive in any order.
But the process must wait for all of the messages to be received, and do
something with the responses as they arrive.
To achieve this, a business object is created in the catalog to keep a
record of the messages that have arrived. The business object is
implemented as an instance variable. The flow is blocked by an
Interactive Activity only if a message has not been received.
This example shows how to use activity.source to determine where in the
process the instance was when the notification arrived. It also shows
how instance state can be preserved when multiple asynchronous events
are occurring, and they can occur at any time in the life of the
instance. Boolean flags are used in the business object instance to
ensure that the process instance only reacts once to a certain type of
message.
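As a sketch of that bookkeeping (the message type names are hypothetical), the business object's react-once logic amounts to something like:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the instance variable's bookkeeping: remember which message
// types have already been handled so the instance reacts only once to
// each, even when duplicates or out-of-order deliveries occur.
class MessageTracker {
    private final Map<String, Boolean> handled = new HashMap<>();

    // Returns true the first time a given message type arrives,
    // false on any repeat delivery.
    boolean markReceived(String messageType) {
        if (Boolean.TRUE.equals(handled.get(messageType))) {
            return false;
        }
        handled.put(messageType, true);
        return true;
    }

    // True once every expected message type has been seen, which is
    // when the blocked flow can be allowed to move on.
    boolean allReceived(String... expectedTypes) {
        for (String t : expectedTypes) {
            if (!Boolean.TRUE.equals(handled.get(t))) {
                return false;
            }
        }
        return true;
    }
}
```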
Process Project Export
Note: In the process export all four Automatic Activities utilize the same process method.
All three of these patterns are included in the following Process
Project Export. Also included is the “GrabAndRedirect” process and
demonstrations on how to send a JMS message in PBL.
I have been up to my neck in Java and BPM stuff for a while, but
nothing really exciting. So, since I haven't posted for a while, I
thought I'd clean up an old .NET / WCI portal project I did and tell you
guys about it.
The stars of this project are the MooTools form validators. They are
really elegant in both implementation and presentation. Once you load
the MooTools library on the page and add one line of code to the HTML
form item you want to validate, BAM! That's it. If you want it to look
pretty, just add a stylesheet. All of the effects are controlled through
styles.
More information about these validators can be found at the mootools.net site.
Note: These user controls were made with a dated version of MooTools.
And, the validators at that time had not yet been integrated into
MooTools. They were still being managed by CNet / Clientcide. If I get
some time, and I think it's interesting enough, I'll update it to the
latest version.
In The Portal
While these controls work great in a standard ASP.NET web application, they are really designed specifically to run in an Oracle Web Center Interaction (WCI) portal.
Designing portlets for a WCI portal (aka Aqualogic User Interaction
(ALUI), aka Plumtree) can be a bit daunting for junior developers. The
Oracle .NET Application Accelerator makes things a bit easier by capturing
post-backs and updating the portlet in-place. But, if you want to do
fancy AJAX stuff like some of the MooControls do, then things get dicey.
The Microsoft AJAX controls just don't play well with WCI.
The MooControls, combined with the Oracle WebCenter Application
Accelerator for .NET, make creating nice, form-based WCI portlets much
easier. Plus, there are a few tricks they can do to really make the
portlets shine.
Don't have a WCI portal? Don't worry; the MooControls work fine in a
regular ASP.NET application. One nice feature of the MooControls is that
they can detect whether the application is being hosted in a portal.
This makes debugging the UI outside of the portal much easier.
Each MooControl is a derivation of one of the standard .NET
WebControls you probably already know and love. The next table shows
a list of the MooControls. The names of the controls and their base
class should tell you what they do. If you need more information, check this page for details on each control and their attributes, methods and behaviors.
| MooControl | ASP.NET Base Control |
| --- | --- |
| MooButton | Button |
| MooLinkButton | LinkButton |
| MooTextBox | TextBox |
| MooDateTextBox | TextBox |
| MooRadioButtonList | RadioButtonList |

The MooControls and their ASP.NET Base Class Controls
First, the MooFormValidator control tells ASP.NET how and when to
validate. Consequently, you will need one of these guys on each form
where you use a MooControl.
The other "Simple Controls" behave exactly like their base control
counterparts. They even appear in the ASP.NET Toolbox and look nice in
the "Design" and "Split" views of the Visual Studio IDE.
To begin using the validator functions of the MooControls, you just need to specify two attributes:
- The "MooFormValidatorID" needs to contain the Id given to the
MooFormValidator control on the form you want the control to be
associated with. (Yes, you can have more than one for a form.)
- Type one or more validator names from this list into the "ValidatorTests" attribute. The validator names should be separated by a single space.
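Wired together, the two attributes look something like this. The "moo" tag prefix and the IDs are assumptions for illustration; "required" and "validate-email" are typical Clientcide validator names, and only MooFormValidatorID and ValidatorTests come from the description above:

```aspx
<%-- Hypothetical markup sketch; tag prefix and IDs are illustrative only --%>
<moo:MooFormValidator ID="FormValidator1" runat="server" />
<moo:MooTextBox ID="EmailBox" runat="server"
    MooFormValidatorID="FormValidator1"
    ValidatorTests="required validate-email" />
<moo:MooButton ID="SubmitButton" runat="server" Text="Submit" />
```

The MooButton then refuses to fire its server-side click handler until the client-side validators pass.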
The MooButton and MooLinkButton have been modified to
ensure that the client-side MooTools validators validate the controls
before executing their server-side event code.
I implemented in the MooControls the idea of dependent controls. What
this means is you can have a set of MooControls that are dependent upon
one master MooControl. These dependent controls either appear/disappear,
enable/disable, or validate/don't validate based upon the value of the
master control.
This is useful for forms where you need additional information if a
user selects a certain radio button or checkbox, or even puts a large
value in a numeric textbox. Of course, you need to provide the test
condition (in the client-side code). See the example project in the
download if you want to see how it's done. It's really not that complicated.
Here is a non-functioning example:
Have you ever been convicted of a felony?
You don't want the "Please Explain" prompt to appear unless the user
specifies "Yes" in the master MooRadioButtonList. The "Please Explain"
label and associated MooTextBox are in a DIV element that specifies the
container for the dependent controls. When the dependent controls are
not visible, their validators are not active.
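A sketch of that felony-question form might look like the following. The markup conventions here, especially how the dependent container DIV is tied to the master control, are assumptions on my part; the download shows the real mechanism:

```aspx
<%-- Hypothetical markup sketch; the linkage between the DIV and the
     master control is illustrative only --%>
<moo:MooFormValidator ID="FormValidator1" runat="server" />
Have you ever been convicted of a felony?
<moo:MooRadioButtonList ID="FelonyList" runat="server">
    <asp:ListItem Text="Yes" Value="Yes" />
    <asp:ListItem Text="No" Value="No" Selected="True" />
</moo:MooRadioButtonList>
<div id="FelonyDetails">
    <%-- container for the dependent controls: hidden, with its
         validators inactive, until "Yes" is selected above --%>
    Please Explain:
    <moo:MooTextBox ID="ExplainBox" runat="server"
        MooFormValidatorID="FormValidator1" ValidatorTests="required" />
</div>
```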
The MooControls has a really neat date-entry control in the
MooDateTextBox (thanks to MooTools). It has really good date entry,
selection, and validation. Plus, you can specify date range validation.
The MooTextBox control can ensure the user enters a specific number of
characters, or a number of characters within a range. If it is a numeric
MooTextBox, you can ensure that the number is in a specified range.
These ranges and validators can be added, removed, or turned off and on;
a failed test causes validation messages to appear. You can change the
validation text if you want. Custom validators aren't too difficult to
create, especially if you know regular expression syntax (not required)
and rudimentary JavaScript.
The Complex Controls
The MooGridView control works like the out-of-the-box GridView
control. But, it has a few extras added to allow the developer to create a
grid of MooControls. This is useful if you need to collect a series of
zero or more instances of a data type. Say you need to have the user
enter the beginning and ending employment dates and the name of the
employer. Each row begins with a checkbox (for row selection), then two
MooDateBox controls and a MooTextBox control, each with their own
validators. At the bottom of the MooGridView are two buttons: "Add" and
"Delete". These allow the user to add and remove rows in the collection.
The MooGridView manages the dynamic creation and removal of the
MooControls contained within the rows of the grid. The developer just
needs to drop the controls in a template row, and bind them to the data
source correctly. An example is in the download.
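A row template for the employment-history example above might be sketched like this. The "moo" tag prefix, the IDs, and the template/attribute names are assumptions; the real markup is in the download:

```aspx
<%-- Hypothetical markup sketch; tag prefix, IDs, and attribute names
     are illustrative only --%>
<moo:MooGridView ID="EmploymentGrid" runat="server">
    <Columns>
        <asp:TemplateField>
            <ItemTemplate>
                <asp:CheckBox ID="SelectRow" runat="server" /> <%-- row selection --%>
                <moo:MooDateTextBox ID="StartDate" runat="server"
                    MooFormValidatorID="FormValidator1"
                    ValidatorTests="required validate-date" />
                <moo:MooDateTextBox ID="EndDate" runat="server"
                    MooFormValidatorID="FormValidator1"
                    ValidatorTests="required validate-date" />
                <moo:MooTextBox ID="Employer" runat="server"
                    MooFormValidatorID="FormValidator1"
                    ValidatorTests="required" />
            </ItemTemplate>
        </asp:TemplateField>
    </Columns>
</moo:MooGridView>
<%-- The grid renders the "Add" and "Delete" buttons itself and
     clones or removes the template row's controls dynamically --%>
```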
Next, the MooPopupView is just low-hanging fruit. MooTools provides
what they call a StickyWin control. It is a very customizable modal
DHTML window which can appear and disappear based on user interaction.
Every bit of HTML that appears within the start and end tags of the
MooPopupView control will appear inside a StickyWin window. It will
appear as a modal, draggable window. The rest of the page will be
"greyed-out" and disabled. And, the developer gets to select the buttons
that appear at the bottom of the StickyWin. See the download project
for an example and documentation.
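Usage is about as simple as it sounds; something along these lines, where the tag prefix and ID are assumptions:

```aspx
<%-- Hypothetical markup sketch: everything between the tags renders
     inside a modal StickyWin --%>
<moo:MooPopupView ID="TermsPopup" runat="server">
    <h3>Terms of Use</h3>
    <p>This content shows in a modal, draggable window while the
       rest of the page is greyed-out and disabled.</p>
</moo:MooPopupView>
```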
The WCI Portal Controls
If you are going to use the MooControls in a WCI portlet, the MooPT
control should be on every page that is hosted in a portlet. First, it
detects whether the browser request is coming through the portal, or
straight from the browser, then changes the script path accordingly.
This is wonderful if you need to test your portlet outside of the portal.
When the request comes through the portal, the MooTools script files are
served from the Image Server. Otherwise, it will look for the files in
the web application's own script folder.
The second thing it does is drop the "pt:token" tag on the page, so
you can make all of the names and IDs in the portlet page unique.
The MooPTImage tag behaves like the out-of-the-box Image control when
the page request comes directly from the browser. But, if it comes from
the portal's gateway, it will put the "pt://images/" prefix in front of
the image URL. This ensures that your images are served from the Image
Server when the portlet is deployed. And, if you want to test your page
outside of the portal, you don't need to change the URLs of your images.
The last of the portal MooControls is the MooPopupPortletContainer
control. This control is also the most complex of them all. Setting it
up requires a bit of orchestration. But once operational, the effect can
be quite cool.
Here is the idea. You have one portlet on a portal page. In that
portlet, you have a link or button that when pressed causes a form to
appear in a modal style like the MooTools StickyWin or MooPopupView
control. The difference is, post-backs occur in-place inside the modal
window. When finished, the user clicks a button on the form; it
posts back to save the form data, then the modal window goes away and
sends a result back to the parent portlet.
This is done by having a portlet page with the
MooPopupPortletContainer in it. The control causes the portlet to
render invisibly until it is called. When called, the invisible portlet
loads an ASPX page, becomes visible, and changes its dimensions so that
it becomes a MooTools StickyWin. The Oracle WebCenter Application
Accelerator for .NET handles the rest. When the user finishes with the
form, the MooPopupPortletContainer returns to its original, invisible state.
I am not actively maintaining this library. But, I did have a great
deal of fun developing it. I really should update them to support some
of the great things in the latest version of MooTools. Perhaps if I get
really bored one day… I thought it was a really good idea. Especially
since I really hate the out-of-the-box validators that Microsoft gave us
with ASP.NET. And, the MooTools validators are very pretty.
I have included a demonstration project with the download. It not
only shows you how to use the controls, but also contains a bit of
documentation as well. Here they are, for your enjoyment:
The download links are below. The WCI version requires the Oracle WebCenter Interaction Development Kit (IDK) and the Oracle WebCenter Application Accelerator for .NET.
A long time ago, when I was supporting various Java applications I
didn't write, I was frequently encountering the dreaded
ClassNotFoundException. (Haven't we all.)
I had just finished playing with the FWZipLib.dll wrapper around the Info
Zip libraries. So, I quickly wrote this utility to scan a folder of
jars (which are just zip files anyway) for a class provided on the
command line. It proved so useful that I still use it occasionally.
It is a simple C# console application. To use it, here is the syntax:
FindClass ClassNameToFind c:\SomeFolder\SomeJavaApp\lib
The utility will then list all of the jar files in the folder. Then,
it will tell you when it finds the class file in one of the jars.
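The scan itself is simple. Here is a sketch of the same idea in Java (the original is a C# console app built on FWZipLib; this standalone version just shows the algorithm, using the JDK's own zip support):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Walk a folder of jars (which are just zip files) and report every
// jar that contains an entry for the named class.
public class FindClass {
    public static List<String> find(Path folder, String className) throws IOException {
        List<String> hits = new ArrayList<>();
        String suffix = "/" + className + ".class";
        try (DirectoryStream<Path> jars = Files.newDirectoryStream(folder, "*.jar")) {
            for (Path jar : jars) {
                try (ZipFile zf = new ZipFile(jar.toFile())) {
                    for (ZipEntry e : Collections.list(zf.entries())) {
                        String name = e.getName();
                        if (name.endsWith(suffix) || name.equals(className + ".class")) {
                            hits.add(jar.getFileName() + " -> " + name);
                        }
                    }
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // Usage mirrors the original: FindClass ClassNameToFind c:\SomeFolder\lib
        for (String hit : find(Paths.get(args[1]), args[0])) {
            System.out.println(hit);
        }
    }
}
```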
However useful it may be, I never really updated it with any fancy features. There are other utilities which other jar-heads use for this purpose. But if you like a simple, and free, utility to find those elusive classes, you can download it below.
Had enough with the portal tools? Well, here is a simple tool I made a long time ago someone may find useful.
The FWZipLib is a C++ DLL wrapper around the venerable Info Zip
library for Windows. This makes the library accessible to .NET
applications. It can do all of the standard operations on zip files:
compression, decompression, listing of zip file contents, password
protection, etc. Basically, just about anything the Info Zip library can
do.
Hold On a Minute
There are a couple of things you must know before using this library.
First, there are much better libraries out there for .NET. (Particularly the SharpZipLib.)
Second, I really don't think that it is all that thread-safe. (I haven't
really tested it.) If you need it for small, one-off utility programs
(console or forms applications), it will probably work fine.
To use it in your average C# program, just reference the FWZipLib.dll.
Then, make sure you copy the Zip.dll and UnZip.dll files into your bin
folder. These are the Info Zip libraries which actually do the work. The
FWZipLib.dll provides the interface to the Info Zip dll's.
The download below includes the source code and the compiled binary for the FWZipLib.dll. It also contains a VB.NET test client to show you how the syntax works.
One day I wanted to show some images in a WCI / ALUI portal and was
astonished to find out that there is no OOTB facility for this. Of
course, I could just host an HTML page in a portlet, or just use a
Publisher announcement portlet. But then I couldn't open my compiler and
have some fun.
Since a slideshow portlet would need a file listing for it to do its
dance in the browser, I figured I could easily extend my fabulous File
Share Portlet.
So here it is. On the back end (administration), it works just like
the File Share portlet. You point it to a UNC share somewhere the Remote
Server can reach, and give it the login credentials. Put the images in
that folder, and the Slideshow Portlet will do the rest. If you create a
sub-folder called "thumbnails" which has smaller images that have the
same filenames as the larger images in the parent folder, Slideshow will
use those in the portlet (for faster rendering), but deliver the larger
ones when you click on the image.

(Screenshot: The Slideshow Portlet in Action)
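So the share might be laid out like this (the server and file names here are made up for illustration):

```
\\fileserver\portal\slideshow\      <- UNC share the portlet points to
    beach.jpg                       <- full-size images, shown on click
    sunset.jpg
    thumbnails\                     <- optional: same filenames, smaller
        beach.jpg
        sunset.jpg
```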
Features

(Screenshot: Slideshow Portlet Index Strip)
The JonDesign Smooth Gallery has some neat features (which can be
tweaked as options to the JS object). It has a slide-out index strip of
all the images in the show for easy navigation to a particular image. It
has a translucent image-information panel which shows momentarily at the
bottom of the image. Also, there are arrows to the left and right to go
to the adjacent images in the show.
Galleries

(Screenshot: Slideshow Portlet Galleries)
If there are sub-folders (not named "thumbnails") at the folder you
point the portlet to, the Slideshow Portlet figures you want to enable
the galleries feature. This will show each folder (even the root folder
if it has images in it) as a block with its name and a sample picture in
it so you can choose which folder you wish to look at.
Too Good to be True?
Put a File Share portlet on the same community page. Point it to the
same folder where the images are. And, set its security so that only
the community owner can see it. Now you've got an easy way to manage the
images through the portal.
Well, this is just an experiment. There are a few glitches which need to
be worked out. Namely, it doesn't work in some versions of IE. But, if
you would like to play with it anyway, the download is below.
What I have here is an attempt at simplicity. The old Plumtree portal
(now called WebCenter Interaction) provides a wealth of features. But,
the implementation of these features can be a bit daunting for the
audience at which the product was targeted.
I have been on many sites where the Knowledge Directory feature to
share company files in a secure manner has been severely underutilized.
The reasons are many, ranging from the highly technical to the
completely non-technical.
The WCI File Share Portlet I wrote does not implement many of
the great features of the Knowledge Directory. What it does do is allow
a community administrator to easily set up a file share through the WCI
portal.
What the User Sees
Here is a look at how it appears on the community or MyPage if the user has only View or Select rights to the portlet.
And, if the user clicks the folder, he gets breadcrumbs, and folder tree navigation.
And here is how it looks to a user who has Edit rights.
Where the Files are Stored
The folder structure is stored in a Windows folder somewhere that a
user and the portlet server can reach via UNC. The folder is protected
by domain credentials. The portlet will impersonate the domain user to
access and maintain the file share.
If the portlet user has Admin rights to the portlet, they will be able to access the administrative preferences as seen here.
As you see, here is where the path to the shared folder is specified,
along with the domain credentials which the portlet will need to access
it. Note: The portlet (remote) server will need to have access (a
network path) to the file share in order for this to work.
There is a link which allows the administrator to specify which WCI
group will have access to the toolbar functions shown in the edit-mode
view.
For narrow columns, there are a couple of checkboxes which will turn off a couple of columns. This makes the portlet fit better.
Because the target of the File Share portlet is known only to the
portlet itself, the portal can't index the documents like it can with
the Knowledge Directory. This severely limits some of the main functions
of the whole portal idea. Saved-search portlets will not be able to
return results from the targeted files.
However, you can point an NT file crawler to the same folder that the
File Share portlet targets, and you have a handy way of pumping files
into the Knowledge Directory.
Try It Out
The File Share Portlet was implemented using C#.NET v2.0 and the .NET
Application Accelerator (used to be called the Plumtree .NET Web
Controls). This allows in-place refreshing of the portlet when the web
form does a "post-back".
If you are adventurous and want to try it out on your lab portal to see how it works, click on the link below.