Category Archives: Mule 4


Mask | Masking in Mulesoft Using Custom Function


In this tutorial we will see how we can mask or encrypt JSON/XML fields in Mule 4, for example while printing logs or to hide PII data.

Mule does have an in-built mask function to support masking (Link), but it has a couple of drawbacks.

  1. With the Mule mask function you can mask only one field at a time. For masking multiple fields you need to write the same function multiple times, which makes it impractical in most scenarios.
  2. With the Mule mask function you can mask/encrypt the field value, but you won't be able to get the original value back.

With the simple DataWeave script given in this tutorial you can overcome both of the above problems.

%dw 2.0
fun mask(content, y) =
    if (typeOf(content) ~= "Object")
        content mapObject ((value, key) -> {
            (if ((y contains (key as String)) and (typeOf(value) ~= "String" or typeOf(value) ~= "Boolean" or typeOf(value) ~= "Number"))
                ((key): mask("*****", y))
            else
                ((key): mask(value, y)))
        })
    else if (typeOf(content) ~= "Array")
        content map mask($, y)
    else
        content

In the above DataWeave script, the function expects 2 input params:

  1. content -> the payload (JSON/XML)
  2. y -> the fields to be masked (Array)

How the above masking script works:

This function checks the payload type:

  • If it is an Object, it loops through each field in that object and masks/encrypts every field whose name matches a field name defined in the 2nd param y and whose value is a String, Boolean or Number.
  • If it is an Array, it loops further inside the array by calling the same function recursively.

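For illustration, below is a minimal, self-contained sketch of the script in action (the sample payload and the field list are just examples):

%dw 2.0
output application/json
// mask function as defined above
fun mask(content, y) =
    if (typeOf(content) ~= "Object")
        content mapObject ((value, key) -> {
            (if ((y contains (key as String)) and (typeOf(value) ~= "String" or typeOf(value) ~= "Boolean" or typeOf(value) ~= "Number"))
                ((key): mask("*****", y))
            else
                ((key): mask(value, y)))
        })
    else if (typeOf(content) ~= "Array")
        content map mask($, y)
    else
        content
---
// masks "name" at the top level and "street" nested inside "address"
mask({ name: "Varun", age: 26, address: { street: "MG Road", city: "Pune" } }, ["name", "street"])

This returns { "name": "*****", "age": 26, "address": { "street": "*****", "city": "Pune" } } — nested fields are masked too, while the remaining fields pass through untouched.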

There might be cases where you need this masking script again and again, in various DataWeave transformations or while logging. Rather than writing the whole script each time, the best way is to create an importable DataWeave module that can be externalized and then used in a single line in every transformation that needs it.

You can do this by creating the below DWL file under “src/main/resources/modules/” in your Mule project.

DWL filename – “maskFields.dwl”

%dw 2.0
var maskKeyValue = (func, x = [], y = "[]") -> func(x, read(y default "[]", "application/json"))
fun mask(content, y) =
    if (typeOf(content) ~= "Object")
        content mapObject ((value, key) -> {
            (if ((y contains (key as String)) and (typeOf(value) ~= "String" or typeOf(value) ~= "Boolean" or typeOf(value) ~= "Number"))
                ((key): mask("*****", y))
            else
                ((key): mask(value, y)))
        })
    else if (typeOf(content) ~= "Array")
        content map mask($, y)
    else
        content

And now in your XML code you can call this DWL as below –

%dw 2.0
import modules::maskFields
output application/json
---
maskFields::mask(payload, ["name", "age", "street"])

or, using a property file that stores the fields to be masked:

%dw 2.0
import modules::maskFields
output application/json
---
maskFields::maskKeyValue(maskFields::mask, payload, Mule::p("fieldsToMask") default "[]")

dev.yml (property file) –

fieldsToMask: "[\"name\", \"age\", \"street\"]"


Salesforce – Job Info, Batch Info, Batch Result


In our previous tutorial “CREATE BULK JOB SALESFORCE CONNECTOR” we covered creating bulk jobs in Salesforce via Mule 4. In this tutorial we will use the Salesforce connector components provided by Mule to fetch the details of the job created: how many records/batches failed, how many succeeded, and the current status of the job.

We will be covering the following Salesforce connector operations in Mule 4:

  1. Job Info
  2. Batch Info List
  3. Batch Info
  4. Batch result stream
  5. Batch result

 

Job Info


The Salesforce Job Info connector is used to get the details of a particular job that has been created in Salesforce. This operation enables you to track the execution status.

Parameter

On successful execution of “Job Info”, below is the output:

Configuration –
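In XML, the operation configuration looks roughly like the sketch below (the global config name Salesforce_Config and the jobId expression are assumptions; exact attribute names can vary by connector version):

<salesforce:job-info doc:name="Job Info" config-ref="Salesforce_Config" jobId="#[payload.id]"/>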

Output –

 

Batch Info List


The Salesforce Batch Info List connector gets information about all batches in a job.

Parameter

On successful execution of “Batch Info List”, below is the output:

Configuration –

Output –

 

Batch Info


The Salesforce Batch Info connector gets information about a particular batch inside a job.

Parameter

The Batch Info parameter should contain the Job Id and Batch Id for which details need to be fetched.

On successful execution of “Batch Info”, below is the output:

Configuration –
We will be sending JobId and id (batch Id) to Batch Info to retrieve the batch details.

Output –

 

Batch result


The Salesforce Batch Result connector gets the results of the records processed inside a particular batch.

Parameter

The Batch To Retrieve parameter should contain the Job Id and Batch Id for which details need to be fetched.

On successful execution of “Batch Result”, below is the output:

Configuration –
We will be sending JobId and id (batch Id) to Batch Result to retrieve the batch result.
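A rough XML sketch of this operation (the config name and the jobId/batchId expressions are assumptions; check the parameter names in your connector version):

<salesforce:batch-result doc:name="Batch result" config-ref="Salesforce_Config" jobId="#[vars.jobId]" batchId="#[vars.batchId]"/>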

Output –

 

Batch Result Stream


The Salesforce Batch Result Stream connector also gets the results of the records processed inside a particular batch. It is best used when a huge record result has to be pulled.

Parameter

The Batch To Retrieve parameter should contain the Job Id and Batch Id for which details need to be fetched.
The Streaming Strategy can keep the data in memory with the “Repeatable In Memory Stream” config, or store it in a file with “Repeatable File Store Stream”.

On successful execution of “Batch Result Stream”, below is the output:

Configuration –
We will be sending JobId and id (batch Id) to Batch Result Stream to retrieve the batch result.

Output –

 

Download Mule Project for this tutorial

 

Official Mule 4 documentation on Jobs and Batch. Link.
Also refer to Bulk API Guide on Salesforce. Link

 


Variables in Mule 4




In this tutorial we will look at how we can create and use Mule variables in Mule 4, and how they differ from variables in Mule 3.

In Mule 3 we had flow variables, session variables and record variables to store data inside a Mule flow. In Mule 4 this has changed: session variables and record variables have been removed and there are only flow variables.

As in Mule 3, a flow variable's value in Mule 4 is lost when the flow crosses a transport barrier.
Session variables have been completely removed in Mule 4.

In Mule 4, flow variables have been enhanced to work efficiently during batch processing, just like record variables did. Flow variables created in a batch step are now automatically tied to the record being processed and stay with it throughout the processing phase, so record variables are no longer needed.
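For reference, a flow variable is created with the Set Variable component and read back through the vars binding; a minimal sketch (the variable name and value are just examples):

<set-variable variableName="userId" value="#[payload.id]" doc:name="Set userId"/>
<logger level="INFO" doc:name="Log userId" message="#['userId is ' ++ ((vars.userId default '') as String)]"/>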


Retry Mechanism – Until Success Vs Flow Reference



In Mule 3 we had the rollback exception strategy, which gave us the ability to retry an execution in case of an error and to define a separate flow to be executed once the retry count was exceeded.

In Mule 4 you do have a reconnection strategy, which can be defined on connectors, but it only retries in case of connection failures. Mule 4 does not have a rollback exception strategy, so in this tutorial we will look at how to implement the same functionality in Mule 4.

To achieve this retry mechanism we could use Until Successful, but the issues we would face are:

  1. We cannot specify a specific condition under which the retry should happen. For example, we cannot retry only when the HTTP status code is 502.
  2. We also cannot implement an error flow once an error has occurred. For example, sending the error message to a queue every time before retrying.

Scenario 1: We want to implement a retry mechanism on a web service call: if the call fails with HTTP status code 502, the API should retry the web service call a maximum of 3 times.

To complete the above scenario, we will be using Flow Reference.

In Mule 3, a flow reference could not call the flow in which it was defined; in Mule 4 a flow reference can call any flow, including its own.

Flow Diagram:

All we need is a flow reference that calls its own flow when an error is generated. We have moved the HTTP Request to another flow, “HTTPFlow”, which is referred to by a flow reference in the main flow “get:\users:test-config”.

Inside HTTPFlow we have the HTTP Request call on which we have implemented the retry mechanism. In the error handling part, “On Error Continue” checks whether the retry count has reached its maximum. Inside the error flow of “On Error Continue”, the retry count is incremented and, after a few seconds of sleep, a flow reference calls HTTPFlow again. Once the retry count reaches its maximum, “On Error Continue” no longer catches the error and the final error is thrown back to the parent flow.

    <flow name="get:\users:test-config">
    <ee:transform xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd" doc:id="86de922d-7d4d-4d0a-b010-e1cf9e23a79d">
            <ee:message>
                <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
  userID: [
    "1", 
    "2"
  ],
  userName: "Varun",
  subject: [
    "Maths", 
    "Mule", 
    "TIbco"
  ],
  class: {
    name: "Class 10"
  }
}]]></ee:set-payload>
            </ee:message>
        </ee:transform>
    <logger level="INFO" doc:name="Logger" doc:id="897eb15a-c379-4051-ae78-21ebbbf33cd1" />	
      <set-variable value="1" doc:name="SetRetryCount" doc:id="ae08693c-0c8e-4397-b5e2-235b8b288821" variableName="retryCount" />
    <flow-ref doc:name="HTTPFlow" doc:id="84ab16f4-0fa5-4ac4-a73e-80dd7ab20ea0" name="HTTPFlow"/>
    <logger level="INFO" doc:name="Logger" doc:id="92727a36-d8ed-4ea1-8616-3c0537598400" />
    </flow>
  <flow name="HTTPFlow" doc:id="610bee6d-59f2-4f77-a29e-d60b88aaea01" >
    <logger level="INFO" doc:name="Logger" doc:id="38537854-3f21-48a7-a6a6-31907d8bca90" message="Calling HTTP request count - #[(vars.retryCount default 0)]" />
    <http:request method="GET" doc:name="HTTPCall" doc:id="c766093c-c7ac-444f-914d-cd4d1b70676d" config-ref="HTTP_Request_configuration" path="/abc">
      <reconnect />
    </http:request>
    <error-handler >
      <on-error-continue enableNotifications="true" logException="true" doc:name="On Error Continue" doc:id="8d23329f-b006-4a56-b6a7-6e33eb748957" when="#[(vars.retryCount as Number default 0) &lt; 3 and error.muleMessage.attributes.statusCode == 502]">
        <logger level="INFO" doc:name="Logger" doc:id="1be75ffe-a4bf-4fe1-9802-ae1309d76341" message="#[error.description]"/>
        <set-variable value="#[(vars.retryCount default 0) +1]" doc:name="Increment retryCount" doc:id="a9877e1d-d1f5-4786-93e9-58126d08f3f4" variableName="retryCount"/>
        <scripting:execute doc:name="Sleep" doc:id="531bc61a-937d-4a0c-81ce-1ea0685ce64f" engine="groovy">
          <scripting:code >def duration = Long.valueOf('3000');
sleep(duration);
return message.payload;</scripting:code>
        </scripting:execute>
        <flow-ref doc:name="HTTPFlow" doc:id="3f37c302-ec9a-4751-ab4e-dcdefb2607f5" name="HTTPFlow"/>
      </on-error-continue>
    </error-handler>
  </flow>

 

Scenario 2: Here we want to implement a retry mechanism on a web service call when a specific value is received. For example, if the web service call returns the value 5, a retry should happen a maximum of 3 times; otherwise not.

Implementation:

We have moved the HTTP Request to the sub-flow “testSub_Flow”, which is referred to by a flow reference in its parent flow “post:\users:application\json:test-config”.

Inside testSub_Flow we use a flow reference to call the sub-flow itself. Once we have received the response from the web service call “Request”, a Choice router routes the flow processing based on the response received and the number of retries, as sketched below.
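A minimal sketch of this sub-flow (the flow names, the payload.value field checked, and the retry limit of 3 are illustrative):

<sub-flow name="testSub_Flow">
  <http:request method="GET" doc:name="Request" config-ref="HTTP_Request_configuration" path="/users"/>
  <choice doc:name="Retry needed?">
    <!-- retry only when the response contains the specific value and retries are left -->
    <when expression="#[payload.value == 5 and (vars.retryCount as Number default 0) &lt; 3]">
      <set-variable variableName="retryCount" value="#[(vars.retryCount default 0) + 1]" doc:name="Increment retryCount"/>
      <flow-ref name="testSub_Flow" doc:name="Call testSub_Flow again"/>
    </when>
    <otherwise>
      <logger level="INFO" doc:name="Done" message="#['Finished after ' ++ ((vars.retryCount default 0) as String) ++ ' retries']"/>
    </otherwise>
  </choice>
</sub-flow>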

 


Parallel For Each in Mule 4


The Parallel For-Each scope enables you to process a collection of messages by splitting the collection into parts that are simultaneously processed in separate routes. After all messages are processed, the results are aggregated following the same order they were in before the split, and then the flow continues.

In the below tutorial we will see how we can use Parallel For Each in your project.

Download Parallel For Each Example

Syntax:

<parallel-foreach doc:name="Parallel For Each" collection="payload">
    <!-- code to be processed in parallel -->
</parallel-foreach>

Parallel For Each

In this example we will send a JSON message as an array, which will be split by Parallel For Each and executed in parallel.
Inside the Parallel For Each we transform the message received with a delay of 5 seconds, so that we can clearly see in the logs whether our API has processed the messages in parallel or not.

<sub-flow name="addUsersParallelForEach" doc:id="5820e110-740b-48aa-baf2-b4f0fa68716a" >
  <logger level="INFO" doc:name="Log Request" doc:id="53992e7f-84cf-4c29-bb74-2be27a2ececf" message="'request received - ' #[payload]"/>
    <parallel-foreach doc:name="parallel For Each" doc:id="81acc47f-7b50-4806-95ea-6e7f24cd6683" collection="payload">
      <ee:transform doc:name="Transform Message" doc:id="751032f3-11f6-4ce3-b136-73e534bd6224" >
        <ee:message >
          <ee:set-payload ><![CDATA[%dw 2.0
import * from dw::Runtime
output application/json
---
msg : payload.username ++ ' processed' wait 5000]]></ee:set-payload>
        </ee:message>
        <ee:variables >
        </ee:variables>
      </ee:transform>
      <logger level="INFO" doc:name="for-each output" doc:id="bfe598c4-6b02-4d36-8fa9-00d9cb2a8cce" message="for-each output:  #[payload]"/>
    </parallel-foreach>
    <set-payload value="#[%dw 2.0
output application/json
---
payload]" doc:name="Set Payload" doc:id="989fedef-40e2-4e74-87e6-577bebca3b4c" />
    <logger level="INFO" doc:name="Logger" doc:id="fa4eb2fb-a70c-4489-aaff-81eafa03213f" message="#[payload]"/>
  
</sub-flow>

Request:

Output:

Logs: In the logs we can see that the messages are processed in parallel.

Parallel Processing in Batches

In this example we will execute parallel processing, but in batches. Suppose we are connecting to an external system (for example Salesforce) and need to send the request in batches of 200, with all the batches executed in parallel.
How can we achieve this? It is simple: by using the divideBy function.

<parallel-foreach doc:name="parallel For Each" doc:id="3641e6b1-e499-4528-b6f6-d9ad7545368e" collection="#[import * from dw::core::Arrays output application/json --- payload divideBy 2]">

Here, for example, if we receive 10 records, they will be split/divided into sets of 2, and 5 jobs will be created and executed in parallel.
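As a quick illustration of divideBy on its own (the input array is arbitrary; the expected result is shown in the comment):

%dw 2.0
import * from dw::core::Arrays
output application/json
---
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10] divideBy 2
// returns [[1,2], [3,4], [5,6], [7,8], [9,10]]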

In the below code we divide the payload received into sets of 2, then transform the message received with a delay of 5 seconds, so that we can clearly see in the API logs whether the messages are processed in parallel or not.

<sub-flow name="addUsersBatchParallelForEach" doc:id="aedbbefb-d38f-4ee1-a7e3-dc537645da5e" >
  <logger level="INFO" doc:name="Log Request" doc:id="1e9de3c9-cce7-4744-9010-c3b9b2a100ab" message="'request received - ' #[payload]"/>
    <parallel-foreach doc:name="parallel For Each" doc:id="3641e6b1-e499-4528-b6f6-d9ad7545368e" collection="#[import * from dw::core::Arrays output application/json --- payload divideBy 2]">
      <flow-ref doc:name="Flow Reference" doc:id="2bf73bb0-1916-47fd-967d-4cdde18428f3" name="addUsersSub_Flow_BatchParallelForEach"/>
    </parallel-foreach>
    
    <set-payload value="#[%dw 2.0
output application/json
---
flatten (payload.payload)]" doc:name="Set Payload" doc:id="bbee3431-7975-4528-93cf-3955ee4011cc" />
    <logger level="INFO" doc:name="Logger" doc:id="fffae7bf-573d-4b27-9a69-26ecedde5d78" message="#[payload]"/>
  
</sub-flow>
  <sub-flow name="addUsersSub_Flow_BatchParallelForEach" doc:id="eb15de26-5035-47ab-8183-4e6efbe49b80">
  <ee:transform doc:name="Transform Message" doc:id="eaaebb0c-539a-4269-8295-2701b5c6397a" >
        <ee:message >
          <ee:set-payload ><![CDATA[%dw 2.0
import * from dw::Runtime
output application/json
---
(payload map {
  msg : $.username ++ ' processed' 
}) wait 5000 
]]></ee:set-payload>
        </ee:message>
        <ee:variables >
        </ee:variables>
      </ee:transform>
      <logger level="INFO" doc:name="for-each output" doc:id="198b6135-10e6-4882-bd1d-1686dd3f49fd" message="for-each output:  #[payload]"/>
  </sub-flow>

Output:
To get only the message payloads received after processing, we use flatten(payload.payload).

Logs:


Executing Dataweave Dynamically


If you want to keep your DataWeave expression outside the Mule project and load and process it at runtime, you need the Dynamic Evaluate component.

Download Dynamic Evaluate Project Example

In a scenario where the DataWeave mapping conditions are expected to change frequently based on the client's requirements and you don't want to redeploy running APIs again and again, we can store the DataWeave expression in a DB, S3 or another location, then fetch and process it dynamically in our Mule API. Any change made to this external DataWeave script will be picked up by Mule when it reads the script from the external source.

In the below example we use the variable dynamic_dw to store the DataWeave expression as a string. In the real world this DataWeave expression would come from an external source such as a DB or SFTP and be stored in a variable.

Request:

Response:

Code:

<sub-flow name="dynamic-evaluateSub_Flow" doc:id="2145c3c0-5196-418f-9dd3-adf06966cc4a" >
  <set-variable value="#[%dw 2.0 
output application/json 
---
payload]" doc:name="Store payload" doc:id="18b55a69-28fd-48ae-9344-f80e9be3ffc6" variableName="reqReceived"/>
  <set-variable value="#['%dw 2.0 output application/json --- vars.reqReceived.username']" doc:name="datawave received from external source" doc:id="29aebfba-73e4-41b3-9f3f-e508f98da413" variableName="dynamic_dw"/>
  <logger level="INFO" doc:name="datawave received" doc:id="672895f7-42e8-4c25-8dca-b11bd61634b3" message="#['script - ' ++ vars.dynamic_dw]"/>
  <ee:dynamic-evaluate doc:name="Dynamic Evaluate" doc:id="59d877bb-36ba-4193-bf72-df5083a06d22" expression="#[vars.dynamic_dw]"/>
  <logger level="INFO" doc:name="output" doc:id="fea7a7e5-b8d3-4726-80b8-a846c3794a71" message="#[payload]"/>
</sub-flow>

 


Error Handling In Mule 4


In this “Error Handling in Mule 4” tutorial we will understand the various types of error handling and how we can implement them in our project with an example.

There are 3 types of error handling mechanisms in Mule 4:

  1. On Error Continue
  2. On Error Propagate
  3. Try Catch Scope


On Error Continue


On Error Continue catches the error and does not report it as an error; thus the processing of the flow continues even after the error has occurred. This error handler can be used in flows where you don't want to stop flow processing even if an error has occurred.

For example, in the below flow the parent flow will execute till the end even if the web service consumer has returned an error.

SchedulerFlow calls the callWebService flow; in case of an error at point 9 (at the web service consumer) the flow processes as follows: 1->2->3->7->8->9->12->13->4.
Here at point 13 the error is sent to the parent flow (SchedulerFlow) as a normal flow message, and the parent flow continues its processing.

On Error Propagate


On Error Propagate works exactly like the Mule 3 Catch Exception Strategy. In case of any error, On Error Propagate processes the error message and re-throws the error to its parent flow. No further processing is done in that particular flow.

For example, in the below flow, when execution starts, points 1, 2, 3 execute first. On an error at point 3 the error is caught by On Error Propagate and error processing begins with points 6 and 7; once the error-handling flow completes, flow processing ends and the error is re-thrown to the parent flow.

In case of no error (the happy scenario) points 1,2,3,4,5 are executed; in case of an error at point 3, points 1,2,3,6,7 are executed.

In the second example below, SchedulerFlow calls the callWebService flow; in case of an error at point 9 (at the web service consumer) the flow processes as follows: 1->2->3->7->8->9->12->13->5->6.
Here at point 13 the error is thrown to the parent flow (SchedulerFlow), and the parent flow's error handler is invoked.

Try Catch Scope


The Try Catch scope can be used within a flow to handle errors of just its inner components. It can be very useful when we want a separate error-processing strategy for various components in the flow.

For example: in case of an error at point 3 (at the web service consumer) the flow processes as follows: 1->2->3->7->8->10->11.
In case of an error at point 5 (at the Salesforce connector) the flow processes as follows: 1->2->3->4->5->9->6.

 

Configuring On-Error Continue and On-Error Propagate


As in Mule 3 we had to specify which error was to be caught inside the Catch Exception Strategy, we can do the same in Mule 4 with even more control.

In Mule 4 we can specify an Error Type and/or a When condition; the error handler whose condition evaluates to true is executed. If no error handler catches the error, the error is re-thrown to the parent flow.

Error Type: This matches the type of error that is thrown. Error Types are auto-populated based on the connectors used in the flow; the list contains the errors those connectors can throw.

 

When Condition: the expression that is evaluated to determine whether the error handler should be executed. This should always be a boolean expression.

In the below example, the error handler is invoked only when the variable errorCount is greater than 3.
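A minimal sketch of such a handler in XML (the error type HTTP:NOT_FOUND and the errorCount variable are only illustrative):

<error-handler>
  <on-error-continue type="HTTP:NOT_FOUND" when="#[(vars.errorCount as Number default 0) &gt; 3]">
    <logger level="WARN" doc:name="Log error" message="#[error.description]"/>
  </on-error-continue>
</error-handler>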


Creating MUnits Mule 4


In this tutorial we will be creating MUnits for a simple flow that listens over HTTP (REST), sends the request to Salesforce (via the Salesforce connector) and returns a JSON message in response. The returned response will be asserted against the expected response.

Creating Munits


To build MUnits, right-click the API router and select “Create Test Suite for [File Name] from RAML”. This will auto-create a basic MUnit structure for you.

The auto-created MUnits will generally have:

  • All the MUnits generated for flows that are mentioned in that RAML or WSDL.
  • Each MUnit flow will have a “Set Payload” component containing the request message needed to start the flow. This request message is auto-picked from the RAML if an example is defined; MunitTools::getResourceAsString reads the file specified.
  • In the Execution section, each MUnit flow has a flow starter that sends the request message to the specific flow to be tested. It can be a flow reference, VM, an HTTP “Request” in case of a REST service, or a “Consumer” in case of a SOAP service, based on the flow to be tested. Since we have built a REST service using RAML, Mule 4 automatically adds an HTTP “Request” component. **You might need to configure the HTTP Requester or Web Service Consumer inside MUnits so it can call your API's endpoint.
  • In the Validation section, Mule 4 auto-adds assertions: one checks the HTTP status code returned by the API, and the other checks the final response returned by the Mule flow and compares it with the expected response. The expected response is auto-picked by Mule 4 if it is already defined in the RAML response example.

 

Running Munits


You can go ahead and run MUnits by right-clicking and selecting “Run MUnit Suite”.

You can also run MUnits from the command prompt: open a command prompt, go to the project root folder and type “mvn test”. This will run your MUnits from the command prompt.

Why To Mock Connectors?


If we do not mock our connectors, then on running MUnits Mule 4 will actually post requests to the external environment through the connectors used in our project. Since we are using the Salesforce connector to connect to the Salesforce environment, on running MUnits the flow connects to the Salesforce environment and posts its requests there. This can be a problem when we deploy our application to production servers; data can get modified even before the Mule API is deployed successfully.
Thus, mocking all your connectors ensures that the tests don't connect to the external environment and use a predefined response every time.

How to Mock?


To mock a connector, we need to place “Mock When” in the Behavior section and define its configuration.

You can set the processor attribute to define the processor to mock, using the connector namespace and operation, and the with-attribute element to define the connector's attribute name and value, so that Mule can identify which connector is to be mocked. In “Then-return” you can define the message that the mocked connector should return, as sketched below.
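A minimal sketch of such a mock (the mocked operation, the attribute matched on, and the response file are illustrative):

<munit-tools:mock-when doc:name="Mock Salesforce create" processor="salesforce:create">
  <munit-tools:with-attributes>
    <!-- match the connector instance by its doc:name attribute -->
    <munit-tools:with-attribute attributeName="doc:name" whereValue="Create" />
  </munit-tools:with-attributes>
  <munit-tools:then-return>
    <!-- return a canned response instead of calling Salesforce -->
    <munit-tools:payload value="#[readUrl('classpath://sample_sf_response.json', 'application/json')]" />
  </munit-tools:then-return>
</munit-tools:mock-when>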

With this much configuration we are done with our MUnits.


Creating Mule 4 Project with RAML




In this Mule tutorial we will learn how to create a Mule 4 project with RAML, with a detailed walk-through of how the Mule flow works in success and error scenarios:

Mule ESB – What is RAML and why it’s used


RAML stands for RESTful API Modeling Language and is similar to WSDL. A RAML gives structure to the API, which helps developers start their development process and also helps the client invoking the API know beforehand what the API does.

A RAML contains:

  1. The endpoint URL with its query parameters and URI parameters,
  2. The HTTP methods the API listens to (GET, POST, PUT, DELETE),
  3. Request and response schemas and sample messages,
  4. The HTTP response codes an API can return (e.g. 200, 400, 404, 500).

Inbound Outbound Properties


In this “Inbound Outbound Properties” tutorial for Mule 4 we will look at how we can access Mule inbound properties and set outbound properties.

In Mule, inbound properties refer to the additional information that comes to a Mule API along with the message body/payload itself. They may consist of inbound headers, query params, URI params, the HTTP method, etc.
Inbound properties are set by the sender of the message and thus cannot be added or modified.

Mule outbound properties are the headers and properties that a Mule API sets before sending its request to other external systems.

Inbound Properties
In Mule 3 we used to access inbound properties by #[message.inboundProperties]

Whereas in Mule 4 we access these properties by #[attributes]

Example
We have created a simple project using RAML.
The GET method of the RAML has the URI param user_id, which can be accessed by #[attributes.uriParams['user_id']]

Similarly, we access a query param by #[attributes.queryParams['code']]
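As a quick illustration, a Transform Message like the sketch below collects these attributes into the payload (the param names come from this example's RAML):

%dw 2.0
output application/json
---
{
  method: attributes.method,
  userId: attributes.uriParams['user_id'],
  code: attributes.queryParams['code'],
  headers: attributes.headers
}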

To view all the inbound properties received by a Mule API:

#[attributes]


Output :

 

Outbound Properties
In Mule 3 we used to set outbound properties using the Set Property component.
In Mule 4, outbound properties no longer exist. Instead, the headers or properties (e.g. HTTP headers or JMS properties) that you wish to send as part of a request or message (e.g. an HTTP request or JMS message) are configured explicitly as part of the connector operation configuration.
Example:
To set the outbound HTTP headers and HTTP status code for a Mule API, we need to modify the HTTP Listener configuration.
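In XML this corresponds to the listener's response configuration; a minimal sketch (the variable names are illustrative):

<http:listener doc:name="Listener" config-ref="HTTP_Listener_config" path="/users">
  <http:response statusCode="#[vars.httpStatus default 200]">
    <http:headers>#[{'x-transaction-id': vars.transactionId default ''}]</http:headers>
  </http:response>
</http:listener>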

SoapUI Output –


Mule 4: JSON Schema Validation


JSON Schema is a specification for a JSON-based format for defining the structure of JSON data. It validates input data at runtime, verifying whether it matches the referenced schema. We can match against schemas that exist in a local file or at an external URI.

If the payload does not comply with the given JSON schema, the below exception is thrown:

org.mule.module.json.validation.JsonSchemaValidationException: Json content is not compliant with schema

Use Case:

Validating the input JSON payload against a JSON schema.

JSON Payload:

{
  "firstName": "Murali",
  "lastName": "Krishna",
  "age" : 26
}	

JSON – Schema :

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "firstName": {
      "type": "string"
    },
    "lastName": {
      "type": "string"
    },
    "age": {
      "type": "integer"
    }
  },
  "required": [
    "firstName",
    "lastName",
    "age"
  ]
}

Mule Flow:

Step -1 :

Configure the HTTP Listener by giving the hostname, port number and path; optionally, specify the allowed methods in the Advanced tab of the HTTP connector.

Step-2:

Drag and drop the JSON Validate Schema component from the Mule Palette to validate the input payload, and provide the schema path. In my case it is as below:

schemas/Sample-Schema.json

From the above line:

schemas –> the directory

Sample-Schema.json –> the JSON schema used for validation.

The syntax of the JSON Validator is as below:

<json:validate-schema doc:name="Validate schema" doc:id="5a8b10e1-59e8-4f68-9aaa-303c9cb5c9d6" schema="schemas/Sample-Schema.json"/>

Step-3:

Drag & Drop the Logger component to log the resultant payload after validation.

Final Config.xml:

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:json="http://www.mulesoft.org/schema/mule/json" xmlns:validation="http://www.mulesoft.org/schema/mule/validation"
  xmlns:ee="http://www.mulesoft.org/schema/mule/ee/core"
  xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
http://www.mulesoft.org/schema/mule/validation http://www.mulesoft.org/schema/mule/validation/current/mule-validation.xsd
http://www.mulesoft.org/schema/mule/json http://www.mulesoft.org/schema/mule/json/current/mule-json.xsd">
  <http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="8a601d72-5913-4ed7-99d3-707601301ec9" >
    <http:listener-connection host="0.0.0.0" port="8080" />
  </http:listener-config>
  <flow name="abcFlow" doc:id="30917fd1-0429-4ec7-9d7d-aa8d4d19413e" >
    <http:listener doc:name="Listener" doc:id="2b89fed0-69ce-47eb-93bf-3bd0628fe188" config-ref="HTTP_Listener_config" path="abc" allowedMethods="POST">
      <ee:repeatable-file-store-stream />
    </http:listener>
    <json:validate-schema doc:name="Validate schema" doc:id="5a8b10e1-59e8-4f68-9aaa-303c9cb5c9d6" schema="schemas/Sample-Schema.json">
    </json:validate-schema>
    <logger level="INFO" doc:name="Logger" doc:id="26b62866-2f25-4374-9d95-9fe14c052366" message="Payload is Validated ----&gt; #[message.payload]" />
  </flow>
</mule>

Success Scenario:

Failed Scenario:

Thank you!

Please feel free to share your thoughts in the comments section.


Mule – 4 DataWeave Functions – Part – 1


In DataWeave 2.0 functions are categorized into different modules.

  1. Core (dw::Core)
  2. Arrays (dw::core::Arrays)
  3. Binaries (dw::core::Binaries)
  4. Encryption (dw::Crypto)
  5. Diff (dw::util::Diff)
  6. Objects (dw::core::Objects)
  7. Runtime (dw::Runtime)
  8. Strings (dw::core::Strings)
  9. System (dw::System)
  10. URL (dw::core::URL)

Functions defined in the Core (dw::Core) module are imported automatically into your DataWeave scripts. To use other modules, we need to import them by adding an import directive to the head of the DataWeave script, for example:

import dw::core::Strings

import dasherize, underscore from dw::core::Strings

import * from dw::core::Strings
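For example, a minimal script using the import directive (the expected outputs in the comments are per the DataWeave documentation):

%dw 2.0
import dasherize, underscore from dw::core::Strings
output application/json
---
{
  dashed: dasherize("customer_first_name"),     // "customer-first-name"
  underscored: underscore("customerFirstName")  // "customer_first_name"
}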

Sample Payload:

{
  "firstName": "Murali",
  "lastName": "Krishna",
  "age": "26",
  "age": "26"
}

(Note the duplicate "age" field; the distinctBy example below removes it.)

1. Core (dw::Core)

Below are the DataWeave 2 core functions:

++, --, abs, avg, ceil, contains, daysBetween, distinctBy, endsWith, filter, isBlank, joinBy, min, max, etc.

result: [0, 1, 2] ++ ["a", "b", "c"] will give us "result": [0, 1, 2, "a", "b", "c"]

result: [0, 1, 1, 2] -- [1, 2] will give us "result": [0]

result: abs(-20) will give us "result": 20

average: avg([1, 1000]) will give us "average": 500.5

value: ceil(1.5) will give us "value": 2

result: payload contains "Krish" will give us "result": true

days: daysBetween(|2016-10-01T23:57:59-03:00|, |2017-10-01T23:57:59-03:00|) will give us "days": 365

payload distinctBy $ will give us:

{
  "firstName": "Murali",
  "lastName": "Krishna",
  "age": "26"
}

a: "Murali" endsWith "li" will give us "a": true

a: [1, 2, 3, 4, 5] filter ($ > 2) will give us "a": [3, 4, 5]

empty: isBlank("") will give us "empty": true

aa: ["a", "b", "c"] joinBy "-" will give us "aa": "a-b-c"

a: min([1, 1000]) will give us "a": 1

a: max([1, 1000]) will give us "a": 1000

2. Arrays (dw::core::Arrays)

Array-related functions in DataWeave are:

countBy, divideBy, every, some, sumBy

[1, 2, 3] countBy (($ mod 2) == 0) will give us 1

[1, 2, 3, 4, 5] dw::core::Arrays::divideBy 2 will give us:

[
  [1, 2],
  [3, 4],
  [5]
]

[1, 2, 3, 4] dw::core::Arrays::every ($ == 1) will give us false

[1, 2, 3, 4] dw::core::Arrays::some ($ == 1) will give us true

[{ a: 1 }, { a: 2 }, { a: 3 }] sumBy $.a will give us 6

3. Binaries (dw::core::Binaries)

Binary functions in DataWeave 2 are:

fromBase64, fromHex, toBase64, toHex

toBase64(fromBase64(12463730)) will give us "12463730"

{ "binary": fromHex('4D756C65') } will give us "binary": "Mule"

{ "hex": toHex('Mule') } will give us "hex": "4D756C65"

4. Encryption (dw::Crypto)

Encryption functions in DataWeave 2 are:

HMACBinary, HMACWith, MD5, SHA1, hashWith

{ "HMAC": Crypto::HMACBinary(("aa" as Binary), ("aa" as Binary)) } will give us "HMAC": "\u0007£š±]\u00adÛ\u0006‰\u0006Ôsv:ý\u000b\u0016çÜð"

Crypto::MD5("asd" as Binary) will give us "7815696ecbf1c96e6894b779456d330e"

Crypto::SHA1("dsasd" as Binary) will give us "2fa183839c954e6366c206367c9be5864e4f4a65"

5. Diff (dw::util::Diff)

It calculates the difference between two values and returns the list of differences.

DataWeave script:

%dw 2.0
import * from dw::util::Diff
output application/json
var a = { age: "Test" }
var b = { age: "Test2" }
---
a diff b

Output:

{
  "matches": false,
  "diffs": [
    {
      "expected": "\"Test\"",
      "actual": "\"Test2\"",
      "path": "(root).age"
    }
  ]
}

Note:

The rest of the functions are covered in the Mule 4 DataWeave Functions Part 2 article.


DataWeave 1.0 to DataWeave 2.0 Migration – Part -1


DataWeave was introduced in Mule 3 and allows us to convert data to any kind of format, such as XML, CSV, JSON, POJOs, etc. In Mule 3 we use both MEL and DataWeave for writing Mule expressions, with MEL as the default expression language. This approach had data inconsistencies and was scattered. To avoid the stress of converting data objects to Java objects every time, as in Mule 3, Mule 4 makes DataWeave the default expression language in place of MEL.

In Mule 4 the DataWeave version has changed from 1.0 to 2.0.

Apart from syntax changes, there are many new features in DataWeave 2.0
