the world of model driven development

Archive for the ‘Mendix’ Category

Enumerations: The Mapping’s Revenge

In days of yore, we great pioneers of Mendix waded deep into the murky depths of the modeler, handling web service mapping after web service mapping. From those times came an enumeration mapper of inspired thought: Keeping Enumerations as Entities

As you can see from that article, it allowed us to get away from pesky microflows that manually compare strings to enumerations. Unfortunately the methodology has its drawbacks: first of all, the enumerations are stored in the database, which implies constant database queries when mapping. Now if we could only get away from this…

Well, good news! Today we have come with more gifts for you intrepid modelers: an alternate method for determining an enumeration from a string.

So how do we do it? Well, firstly let's look at the problem: we have a few situations (XML, web services) where we receive a string that we need to map to an enumeration. What we are looking for is a method that takes the string as input and retrieves the enumeration value. Now, java actions and microflows can only return a specific enumeration, so unfortunately we cannot get away from that. However, we can make retrieving the value easy.

So in the new method we create a non-persistent entity we call “EnumerationMap”, to which we add the enumerations we wish to map.

The next step is to create a java action which, given the name of the enumeration and its value (caption or value, but case sensitive), returns the EnumerationMap entity populated with the found value. We can then build a nice and simple microflow which returns the enumeration as required (see below). And voilà: now only one annoying entity (although non-persistent) in the model, and no database queries required.
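The magic sauce lives in the java action, but the core lookup it performs can be sketched in plain Java. This is only an illustration under assumed names: `Fruit` and `fromString` are made up here, and the real action resolves enumerations through the Mendix runtime rather than Java enum classes.

```java
// Illustrative sketch only: a case-sensitive lookup of an enum constant
// by its name, mirroring what the mapping java action does internally.
public class EnumMapper {

    enum Fruit { APPLE, BANANA, CHERRY } // stand-in for a Mendix enumeration

    // Returns the matching constant, or null when nothing matches,
    // so the calling microflow can branch on an empty result.
    static <E extends Enum<E>> E fromString(Class<E> type, String value) {
        for (E constant : type.getEnumConstants()) {
            if (constant.name().equals(value)) {
                return constant;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(fromString(Fruit.class, "BANANA")); // BANANA
        System.out.println(fromString(Fruit.class, "banana")); // null: case sensitive
    }
}
```

The case sensitivity noted above falls straight out of the `equals` comparison; a more forgiving variant could use `equalsIgnoreCase`, at the cost of possible ambiguity.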

To see the magic sauce inside the java action please download the example project package here (v 4.0.1)

Please let me know if there are any thoughts or queries

To build or buy, or better yet…

The “build or buy” tradeoff has been described as having “Shakespearean proportions”. It is typically the starting point for solutions to business problems that can be solved with technology.  Due to the increased rate of change and global competition, this question is even more relevant.

To save you research time, we will relay the short and simple answer to this classic question: buy or rent when you seek the solution to a commodity business process, and build when your solution touches the core differentiators of your organisation.

If only it were that simple. The tradeoffs have become more complex. The proliferation of vendors that saturate the market with general-purpose systems makes the choice a complex affair. Add to this the recent arrival of cloud and SaaS service providers that offer a plethora of modules and a new risk/reward model, one that removes hardware and hosting issues but adds data ownership, security and a dependence on the Internet to the considerations. The decision matrix for a “build or buy” choice then becomes an operations research problem, never mind the usual softer issues like internal politics and egos.

The solution is to formally add a third option to the “build or buy” set, namely that of “assemble”. We then have three options: to buy/rent, to build and to assemble. SOA provides the technical framework for this, but does not give us an agile and efficient way to assemble components into a coherent whole that produces business value. Most people will argue that the formal third option is not new and has always been there implicitly, as the combination of a buy plus a build to extend what is purchased or rented. However, a formal “assemble” option adds a new dimension rather than just combining the two.

Traditionally for COTS systems, a buy-build would be required to extend the functionality for the 30% they do not provide. The ERP wars of the 90s showed that this type of hacking resulted in terrible delivery cycles that went over budget, under-delivered and caused huge maintenance nightmares. The lesson: avoid extending off-the-shelf packages wherever possible, because upgrades become a nightmare and the buy value proposition is quickly invalidated. This applies to COTS and open source components and systems alike.

Back to Basics

Before delving into the “assemble” option as a formal construct, we have to emphasise that it relies on having the “basics” in place, namely:

  • Proper business ownership, with a formalised product owner team if more than one person holds the product owner role (not ideal).
  • A solid Agile development framework and software development life cycle process management in place.
  • Knowledgeable people who understand the business and like what they do.
  • The proper investment focus: invest where you gain incremental revenue or competitive advantage.
  • Legacy systems analysed, with a clear view of what still has business value.
  • Business processes defined the right way, focused on the customer rather than inward.

Infrastructure and tools are normally a simple buy choice, as is using open source components such as databases, development and operations tooling, and programming languages.

The build-buy-assemble choice still has the same basic decision points, namely cost, time to market, politics, architecture, skill sets, and strategic value, with the old rule of thumb to buy to the maximum extent possible to cut costs and free up resources for what REALLY delivers the unique edge.

Finally, the industry is changing and we believe the following facts give more scope for a formalised Assemble option:

  • Vendors are saturating the market with general-purpose systems.
  • SaaS will put legacy systems under pressure by offering software with shorter time to market, lower maintenance costs and lower costs as SaaS typically lets customers pick and pay for functionality in modular fashion.
  • The major enterprise software vendors like Oracle and SAP are moving toward component-based models.
  • Traditional large scale development with .NET and Java is as risky today as it was a few years ago.

The third option

So far we have established the basic criteria and fact sets, and can now introduce the third option of system assembly as the better one. Buy components where possible and create a dynamic layer that binds these components into a coherent business delivery model by using model driven technologies. Modern model driven development offers quick time to market and excels at consuming integration points and adding unique business logic, entities and validators, with excellent performance, security, re-usability and maintainability.

SOA provides the technical framework for easily integrating systems. As long as legacy and component based systems provide the correct granularity of services, we can quickly assemble new product delivery systems that leverage and extend them without hacking source systems by using model driven assembly technologies. The important factor here is to choose the correct model driven framework that offers checked and validated logic and can easily consume services and expose its extension points as services.

Modern model driven development specialises in producing reusable components with extension points through what is called a domain specific language (DSL). Using the DSL, we can then augment, extend and aggregate systems to produce business value quickly. There are currently two categories of commercial model driven development vendors that provide the raw material to assemble solutions in an agile, quick and efficient way.

The model-driven market is developing quickly, and vendors like Mendix, Talend, FICO and Microsoft (to name a few) are delivering new platforms and development tools that make the “assemble” option an excellent formal choice. This is supported with training, certification and support options to offer a sustainable development and delivery model alongside the traditional development environments like .NET and Java.

The first category of model-driven development platforms focuses on creating new functionality from scratch and offers a compelling alternative to .NET and Java.  In this space Mendix is one of the leading vendors.

The second category is more specialised in verticals. For instance the various BPM vendors like Bonita offer excellent BPM process management tools. Another example of a model driven vertical is FICO’s Blaze that provides model-based decisioning tools, or Talend’s ETL and ESB tool sets.

To conclude, the “build or buy” choice typically included the buy-extend or the build-from-scratch options. The assemble option replaces buy-extend with buy-assemble. Here the emphasis is on adding the missing functionality to the purchased systems by using the most efficient technology, adding business value quickly on a platform designed to handle change. Model driven platforms are designed with change in mind, and offer checks that validate dependent systems' service consistency with the business process, business entities and state.

The build option can also be replaced with assemble, as we can use the model driven platform to create a solution from scratch that implements ONLY the function points and business processes required to offer tailor-made business value. We will detail what to look for in a good model driven framework in upcoming articles.

Geeking out on a Dark Knight, Artificial Intelligence and Mendix

Mendix is a fantastic business tool geared to providing business with rapid, high-quality applications. Some time ago someone made a joke about writing an artificial intelligence application, specifically a genetic algorithm (GA), in Mendix, and for some reason I couldn't let the idea go, so I gave myself a time-boxed period to model one. I learned a few interesting things about Mendix, which I summarize in the last section, and I describe how I built the GA in the sections in between.

In order to understand more we need to define some GA terms:

1. An Organism: a solution containing the blueprint of the answer to the problem being solved.

2. A Chromosome: a building block of the solution contained within the organism.

3. A Population: a group of organisms, each containing its own specific answer to the problem. Some solutions are better than others, and the organisms with the better solutions are therefore deemed fitter (more valuable) than the rest. When the GA is initialized, the first population of organisms is generated with each chromosome randomly selected; thereafter each generation is bred by crossing the chromosomes of the parent population. Since organisms are created from parents (two organisms joined to create a child organism, mixing the parents' chromosomes into the single child), the population is refreshed with a new set of organisms each generation.

The Rules of the Game

The purpose of the game is to find a way to fill each block of a chess board using an endless supply of knight chess pieces. Practically, and in genetic algorithm language, that means each organism gets a random starting point as part of the initial population generation. From that point it decides which block the knight must move to next (moving in the special way that only knights can). The program allows the knights to move until there are no further free blocks to move to, as dictated by the chromosome move choices they have made.
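As a plain-code illustration of the move rule (the board representation here is an assumption of this sketch, not taken from the model): a knight at `(row, col)` has up to eight candidate squares, which get filtered down to those still on the board and not yet taken.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the knight's move rule used by the game.
public class KnightMoves {

    // The eight L-shaped offsets a knight may move by.
    static final int[][] OFFSETS = {
        {2, 1}, {2, -1}, {-2, 1}, {-2, -1},
        {1, 2}, {1, -2}, {-1, 2}, {-1, -2}
    };

    // Candidate squares from (row, col) that are on the board and still free.
    static List<int[]> freeMoves(int row, int col, boolean[][] taken) {
        int size = taken.length;
        List<int[]> moves = new ArrayList<>();
        for (int[] o : OFFSETS) {
            int r = row + o[0], c = col + o[1];
            if (r >= 0 && r < size && c >= 0 && c < size && !taken[r][c]) {
                moves.add(new int[] {r, c});
            }
        }
        return moves; // an empty list means this organism's run is over
    }
}
```

A knight in the corner of an empty 8x8 board has only two such moves; one in the middle has all eight.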

Mendix GA Chess Board

Example of one of the organisms' chess boards

The Organism, Fitness and Generations

Organisms that can place more knights on the board are seen to have within themselves a more optimized answer to the problem and are defined as fitter than others. Once the fitness for each organism is calculated then the fittest organisms breed and produce the next generation. The organisms breed by randomly mixing their chromosomes together.

In this game the chromosome is a set of decision choices about where to move next. For example, a chromosome might be the decision to move two places up and one place to the right from the last taken block.

I represented each potential move as an enumeration and saved that in an organism entity.

Mendix Genetic Algorithm Entity Model

Genetic Algorithm Entity Model

After all moves had been played out, the fitness was calculated by a simple retrieve on the open board blocks linked to the organism. After the organisms are scored, their enumeration chromosomes are mixed together to form the next generation of organisms.
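The mixing step can be sketched as a uniform crossover: each chromosome of the child is taken at random from one of the two parents. The `Move` names below are invented for this sketch and do not claim to match the enumeration values in the actual model.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the breeding step: uniform crossover over move-choice chromosomes.
public class Breeding {

    // Illustrative move-choice enumeration (names are assumptions of this sketch).
    enum Move { NORTH_EAST, EAST_NORTH, EAST_SOUTH, SOUTH_EAST,
                SOUTH_WEST, WEST_SOUTH, WEST_NORTH, NORTH_WEST }

    // Each gene of the child comes from parent A or parent B with equal chance.
    static List<Move> breed(List<Move> parentA, List<Move> parentB, Random rng) {
        List<Move> child = new ArrayList<>();
        for (int i = 0; i < parentA.size(); i++) {
            child.add(rng.nextBoolean() ? parentA.get(i) : parentB.get(i));
        }
        return child;
    }
}
```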

The breeding microflow on a Mendix GA


The Output

The outputs show that after running the algorithm for six generations or more, the best solution flattens out at about 76% optimization. Progressing to a further optimized solution would probably require the addition of mutated organisms, something I might leave for another day's time box.

The Lessons

I learned a lot in building a GA with Mendix. Some of the lessons are business related; others are things I just picked up during the development of the GA:

1. Mendix promotes a configuration approach

Mendix makes it really easy to keep your settings and configuration for running the application in an entity. Things like the number of generations you'd like the program to run for, the number of blocks on the chess board and the number of organisms per generation are really easy to set. A configuration approach immediately puts a Product Owner/Business in the driving seat, allowing them to be responsible for the application solution. When running GAs you need to tweak these types of settings to figure out how to optimize things. I found that Mendix naturally lends itself to a configurable approach without having to compile and run each time I wanted to adjust something in the algorithm.

2. The Visual Nature of Microflows

The visual nature of Microflows is an added benefit when creating GAs with Mendix. The Microflow which calculates where the next knight should move was laid out visually to match.

The Microflow to calculate where to move next

For example the Microflow to move the next knight left 2 blocks and up 1 block was called “West Up” and could be found exactly there: on the left hand side and up.

3. Traditional looping is difficult, but visually easy to interpret

Creating loops in Microflows is interesting to get your mind around. It doesn't feel natural, but the result is really easy to interpret visually. Compared to plain code, looping within a Microflow feels heavy but achievable.

4. Getting my hands on the Data was Cha Ching

I've created GAs in Object Oriented languages before. Typically it takes too much time to persist the data generated by the algorithm, so debugging and problem solving is normally done by drudging through app-created log files or, if you do output data to a database, by perusing the data via SQL. Mendix entities persist with no effort at all, so I had no problem getting my hands into the data to track down problems, especially since it was effortless to create custom screens from which I could both see and change data while tweaking the algorithm at run time. The result was shorter problem-solving cycles, and in a time-boxed scenario saving time is gold!

5. Good fun

It's great to do a project like this in Mendix. It's also good to see that practices like working from the entity model hold true even when dealing with traditionally “Computer Science” type lab problems. As always, it's great to experience how easy it is in Mendix to create a good quality solution in such a short space of time.

The Knight without shining armour or a nice Italian Suit?

Foreword: Mendix isn't just software. It's a different approach and a different way of thinking. Mendix consultants dress differently, use different tools and think more abstractly about problems. This article was originally written for another tech site but was inspired by the “Mendix Philosophy” and deserves a spot here. HG

IT was created to extend human abilities to where we could not reach and to automate boring and repetitive tasks. IT frees humans to do the things they are good at, like thinking and adaptation. Humans are built to like change and dislike the repetitive. For this reason, companies spend lots of money to figure out how to keep the masses' attention and how to sell better, recruit better and capture more. Yet the systems they build are inherently static, cast in the IT concrete of data schemas, workflow, form and data validation and the last bright developer's wonderful framework that promised flexibility. Promised to deliver a configure-once and “get your new change quickly” framework. The knight without shining armour.

Those that have been through this, and been hurt by it, go out and buy the new knight in shining heavy armour that suits 70-90% and hope to teach the new gentleman their dialect and court manners. He just might end up stealing the hearts of the court's ladies for a time. However, it soon becomes apparent that he was trained in older tactics which do not deliver the punch his sponsors intended, and his adoption of the house's colours and insignia is shallow to say the least. He needs diplomacy and agility that his rigid strictures do not cater for. Still useful for the heavy-hitting punch he can deliver (if his heavy maintenance is paid), he needs the services of the guys in their tailored Italian suits to form new alliances and capture the attention of the up-and-coming masses who might join the ranks. Let's meet the new knights who like change and can switch diplomatic ties, who win the war without a battle and bring prosperity with a handshake. Before we do, however, let's first describe the knight without shining armour.

Current development methodologies are mostly centred around Object Oriented development, which itself builds on Object Oriented Analysis and Design, where problems are decomposed into Nouns with related Verbs. The decompositions are then strung together in different layers to provide a solution to the problem. The knights that wield the OO armour tend to be optimised for delivering the core punch required in the ranks. Their manoeuvrability is low, because the Nouns and Verbs are tightly coupled into layers and have to offer heavy protection to maintain the ranks and flank. Use them as components in the battle to deliver a focussed punch. They are not agile and do not change their method of warfare easily as new tactics are required. They often promise to deliver re-usable weapons that will work beyond the current Fort that needs to be conquered. Sadly, the lead Knight gets bored when the Fort is scaled, and the other knights are not interested in maintaining his siege weapons with their flaws. Besides, the next Castle will require a different tool set to win, and the last knight's light armour does not even shine anymore. The cost of maintenance has become too high, and the King and Lords are at odds with the lost flexibility they were promised.

The King needs the new knights that operate at a higher level. They do not need to wear armour and wield heavy weaponry to define themselves, but they can don the shining light armour should the need arise. Yet that is not their first line of defence nor offence. They know how to move on higher planes of abstraction, to listen to the deal makers of the day, and they are trained to seek to understand before they swing the sword. They like to win the war without a battle, and to deliver business value to the Lords and King. They have the ability to steer the armies and listen to the feedback of the intelligence officers and logistics. In short, they have a different mind set that helps the kingdom prosper, use its current assets to the maximum and change tactics quickly. Meet the knights in Italian suits. Meet the new approach that still leverages the older established weapons and their doctrines, but builds on them with a focus on listening, understanding and delivering value rather than a crushing blow.

If the older knights represent the established development environments like Java and .NET, and the hard-hitting siege weapons represent the ERP systems, the knights in Italian suits represent those that wield new technologies AND methods to deliver value. Notably, model driven development technologies are designed to leverage the existing tool chains while quickly assembling solutions based on their business users' and clients' requirements. Model Driven Development will continue to deliver new frameworks that allow a new breed of expert to assemble solutions, focussing more on the solution than the scaffolding. They are trained to listen to business and understand the business value they must deliver.

Model Driven Development frameworks will be sold as a platform as a service, marketed under various names to appeal to the non-technical Lords: to deliver them from older, inflexible systems while retaining the best elements of the old. Otherwise the older knights would feel threatened and never allow them to enter. Set a watch on the wall and look out for them. They might be impostors, or they may just deliver the city without a war. It is worth a try.

Try model driven development before the other cities flourish and attract all the trade.

Adapted from a true story of an old knight who donned a new Italian suit (or even better: a Dutch suit called Mendix)

Epic Mendix 4 Release

“May the 4th be with you” came and went, and though it seemed a fitting date for announcing the next version of the Mendix modeler, the announcement came today instead and promises to be yet another smashing release.

We were provisioned with a courtesy copy of the beta, and it is absolutely packed with features. It showcases an array of productivity enhancements and new technology, including the ability to develop mobile web forms. That is one of the major highlights of the new release, but it in no way overshadows the countless other tweaks, enhancements and new features included with the new modeler. We find the new additions very welcome and timely, since we're already benefitting from them in a big way.

It's epic feature discovery, with the same kind of excitement that accompanies a new Star Wars episode, for those who are fans. We won't be covering everything in a single post; there is too much. Instead we will attempt to uncover some, if not all, of the new features in the weeks to come.

Grab some popcorn and enjoy our little fan “film introduction” we made for the Beta and go grab the Modeler and see what you can find out!

Mendix Spring 2012 Release

Today was a lot like when the Apple Online Store is down and the famous yellow “We’ll be Back Soon” sticky gets some web-time. It was a morning of click refresh until at last the site was back up.

After snapping out of the hypnotic effect that the new Mendix website had on us, we managed to log in and noticed a huge overhaul of pretty much everything. The public-facing website's change is welcome: it looks inspiring and takes the game up a notch. It's not just a pretty face, though; there are a lot of substantial changes inside too.

It boasts a unified platform, integrating Sprintr™ and the cloud portal into one unified experience. The forum still seems to be a separate entity, but all the other portal bits and pieces are being fused together. The first thing that caught my eye after I logged in was an IM chat widget in the bottom-right corner of the portal, along with an improved layout. It feels more like an app now, and looking for “the old stuff” feels a bit like hunting for easter eggs.

Most important is the new platform. Mendix 4 has a lot of new offerings, which we'll introduce a bit later today.

Here are some of the highlights in the new release:

  • Mobile for the Enterprise
  • One Platform for All
  • Social Productivity
  • Enterprise Integration
  • App Mash-up
  • Improved Performance
  • Non-persisting entities
  • Overall enhancements, tweaks and improvements

Click here to view the official release notes.

Spring is in the air (if you live in the northern hemisphere anyway)

This morning I tried to acquire a Mendix license for a project we are working on, when curiously enough the website had this message displayed. I wonder what's in store?

I know I’ll be refreshing my page the whole day, or I’ll just use that monitoring tool I wrote in Mendix to email me when they go live, or see if Mendix emails an announcement before that.

Enumerations as Entities

Enumerations in most normal languages are annoying to maintain and to build logic off of. In most cases I have seen String variables or manually mapped integers used instead. My experience in the combination of systems analysis, databases and business intelligence always led me to database table entries with business logic attributes, although convincing the developers of their benefit was not always an easy battle. But more on that transition in another post. In comes Mendix with a more formal enumeration that is easier to combine with business logic. As we all use them on a daily basis (or at least should), I won't go into diatribes about their benefits, proper usages and so on. Instead this article revolves around a Java function and entity combination we use to get around the two issues we have had with enumerations.

The first common issue we faced was associating attributes with an enumeration. Creating an entity with the enumeration as a unique attribute solves the issue. However, the problem comes in maintaining this entity when creating new enumeration values particularly across multiple environments.

The second issue we are commonly faced with is mapping web service string results into enumerations (A nightmare for anyone who has ever tried). Creating a microflow to map these manually is a terrible solution but unfortunately the one we tend towards. Not only does it create clunky technical microflows but it also creates maintenance issues.

So what do we do to make our lives simpler? Well, fortunately the team from Mendix has provided the reflection module, a perfect spot to get inspiration from. Using this starting point we construct a Java function, called during the start-up microflows, which (using code liberally inspired by the reflection module) runs through an enumeration set and populates an entity. And voilà: we have a self-building entity to run our business logic and web service mappings off of.
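The idea can be sketched in plain Java: reflection hands us every constant of an enumeration, and each one becomes a row for the entity. `EnumRow` and `Fruit` below are stand-ins for the Mendix entity and enumeration; the real java action uses the platform's reflection helpers instead.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the start-up builder: walk an enumeration via reflection
// and produce one row per constant for the self-building entity.
public class EnumEntityBuilder {

    enum Fruit { APPLE, BANANA, CHERRY } // stand-in for a Mendix enumeration

    // Stand-in for the Mendix entity holding one enumeration value.
    static class EnumRow {
        final String name;
        EnumRow(String name) { this.name = name; }
    }

    static List<EnumRow> build(Class<? extends Enum<?>> type) {
        List<EnumRow> rows = new ArrayList<>();
        for (Enum<?> constant : type.getEnumConstants()) {
            rows.add(new EnumRow(constant.name()));
        }
        return rows;
    }
}
```

Because the rows are rebuilt at start-up, a new enumeration value added in the modeler shows up in the entity on the next run; there is nothing to maintain by hand.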

Now, words are great, but let's not recreate work for everyone; on to an example. Here we have a module (I will work on adding it to the store soon) called EnumEntityBuilder with a simple entity FruitEnumEntity. Add it to a play project and link the FruitEnumEntity-Overview form from the navigation. I have provided three buttons to play with: “Build”, “Clear” and “Find a fruit”. Build runs the Java function which populates the enumeration entity; the grid on the page will display the results. Clear removes the entities, so knock yourself out adding and removing fruits. “Find a fruit” pops up a text search dialog to emulate a web service mapping finding that elusive enumeration.

Well I hope this can help you in future projects to save time and build more robust logic. And once more here is that example module: EnumEntityBuilder.mpk.

Feeling creative?

So who says engineers can't be creative? Firstly, it's a fallacy that only artists are creative, as the very basis of the word implies creating things. And we create stuff daily, and we need to be creative in modeling effective and elegant solutions.

But let's stop splitting the proverbial brush-hairs and do something arty. The guys over in Rotterdam are asking for beautifully crafted microflows. Submit yours now!

Test Driven Modelling

If you come from a hard-core development background, one of the stranger concepts to experience is enterprise development (through modeling) with no unit test coverage strategy. I am not an advocate of smashing a unit test approach into a model driven development strategy, but I do think there are instances, when modeling more complex logic, where a test driven approach is both practical and needed. This approach also works well when many modellers are working on common, re-used microflows, where changes to a single microflow could affect a large portion of the application being developed.

To illustrate this point I have created a simple test module to house my test Microflows for some complex logic. The module contains a screen which outputs the test case results.

There are two types of microflows discussed below: the “test” microflows, which handle the executable logic of the test itself, and the “targeted” microflows, which contain the business logic being tested.

The basics of creating a test driven approach can be summed up in 3 steps:

1. Create a Construct Microflow

This Microflow essentially removes all previous data from the database and populates it with the entities needed to execute the target microflows under certain conditions. Assert microflows will then ensure that the logic within the target microflow executes correctly given the controlled data set created in this step.

Microflow to construct test data

2. Test Microflow

A single test microflow called “test all” is created, from which the “assert” microflows are called. Each assert microflow calls its respective targeted microflow. After calling the targeted microflow containing the logic being tested, the assert microflow either retrieves the relevant entities that the tested microflow has changed or retrieves the object/variable it returns.

Given the controlled data created in step 1, the assert microflow then determines whether the logic has executed correctly. It returns true or false to the test microflow, which logs the result in a string variable called “result”.
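For readers more at home in code, the pattern has a direct analogue. The sketch below is hypothetical plain Java, not anything Mendix generates; every name in it is invented for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the "test all" pattern: each assert step reports pass/fail,
// and the results are collected into the single "result" string that
// the screen displays.
public class TestAll {

    static String runAll(Map<String, Boolean> assertResults) {
        StringBuilder result = new StringBuilder();
        for (Map.Entry<String, Boolean> entry : assertResults.entrySet()) {
            result.append(entry.getKey())
                  .append(": ")
                  .append(entry.getValue() ? "Passed" : "Failed")
                  .append("\n");
        }
        return result.toString();
    }

    public static void main(String[] args) {
        Map<String, Boolean> assertResults = new LinkedHashMap<>();
        // Stand-ins for assert microflows running their targeted microflows.
        assertResults.put("Order Place", true);
        assertResults.put("PO Changes", false);
        System.out.print(runAll(assertResults));
    }
}
```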

Test All Microflow Calls Assert Microflows

3. Output to Screen

Lastly, the result string is output to the screen via the Microflow Label widget, which executes our original “test all” microflow and so starts the testing process. With this method modellers can see the results on screen, instead of using automation testing or navigating through the application to execute the complex logic, to determine whether all the logic scenarios are correctly covered.

Results to the Testing

In the example above I can see that my PO Changes and Order Place logic has failed and therefore needs attention. In a team environment I would expect modellers to execute these tests before merging their work back into source control, to ensure that the source is always correct.
