Saturday, 7 July 2018

CReW Needs Your Opinion

What's your opinion man?

While conducting CReW macro training at Inspire18 with Joe Miller, we revealed several common use cases where the "Expect Equal (Beta)" macro was a solution.  In fact, Joe Miller has begun incorporating the macro into his weekly challenges as a way of confirming the challenge results.  We got a question from the class about this macro which I'm about to share:  "Can you set which fields to check between the two files (instead of checking all fields)?"

What are your thoughts?  My response was to "Select" the same fields from both sets of data for comparison prior to the macro.  Adam was in the room; before the conference I had asked him why the "CReW Test" macros were still in Beta, and his response was that he hadn't received any feedback on them from the Alteryx Community.

Did you know?

  • Expect Equals requires every Field to be in the same order?
  • Expect Equals is case sensitive for Field Names?
  • Expect Equals is NOT case sensitive for data?
Are these bugs or are they features?  I don't know.  But I did have fun with some exploration.  I wanted to create a CReW Delta macro.  It would function similarly to the Expect Equal macro in that nothing happens when everything is equal.  Beyond that, it would output every difference found for every record compared.  It would also provide an error message explaining where the differences were found.

Error: CReW Delta (WIP) (3): There were 1 changed records found.  2 fields had differences:  {Field4, Field5}.  1 unmatched left records and 0 unmatched right records.

In this macro, we compare records and output the Sequence number of each input file along with the field names where differences are found.  The left and right value pairs are easily reviewed from the "Delta" output anchor.

This works wonderfully with 2 records.  We wanted to explore the performance for millions of records.  The number of records along with the number of fields compared will create challenges for computing resources.  We experimented with 2 approaches for comparing the data and the results may surprise you.

Approach #1:  Do it like the CReW Expect Equal Macro

Transpose the data into Name + Value pairs and JOIN the results.  When they match, do nothing; when they don't, they are errors.
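
Outside of Alteryx, the transpose-and-join idea can be sketched in a few lines of pandas (pandas standing in for the Transpose and Join tools; the Sequence/Field/Left/Right column names are my own, not the macro's):

```python
import pandas as pd

def delta_transpose(left: pd.DataFrame, right: pd.DataFrame) -> pd.DataFrame:
    """Melt each table into (Sequence, Field, Value) pairs, join the
    pairs, and keep only the rows where the two sides disagree."""
    def melt(df, side):
        return (df.reset_index().rename(columns={"index": "Sequence"})
                  .melt(id_vars="Sequence", var_name="Field", value_name=side))

    pairs = melt(left, "Left").merge(melt(right, "Right"),
                                     on=["Sequence", "Field"], how="outer")
    return pairs[pairs["Left"] != pairs["Right"]]

left = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]})
right = pd.DataFrame({"A": [1, 2], "B": ["x", "z"]})
deltas = delta_transpose(left, right)
print(deltas)  # one row: Sequence 1, Field B, Left "y", Right "z"
```

Note the blow-up in row count: melting 2 million records with 20 fields on each side creates 80 million pair rows before the join even starts.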

Pros & Cons:

  • You will be creating one (1) record for every field in the incoming data times the number of incoming records times two (2).  Comparing 2 million records with 20 fields will result in 80 million records to compare.  This seems to be a Con.
  • It is very easy to configure this macro.  This seems to be a Pro.
  • We don't have an interface tool to allow you to select which fields to compare.  This was a con for our CReW student.

Approach #2:  Use MD5 Hash on each record and compare the hash results

For each "selected" field on the incoming records, HASH the selected fields and compare the Hash value in a join that requires NO sort. Where the expected values differ, they are errors.
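
In plain Python terms (hashlib standing in for Alteryx's MD5 function; the field list, the "|" delimiter, and the positional pairing are assumptions of this sketch, not the macro's internals):

```python
import hashlib

def row_hash(row: dict, fields: list) -> str:
    """Cast the selected fields to strings and hash them in a fixed order."""
    joined = "|".join(str(row[f]) for f in fields)  # delimiter choice is an assumption
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

left  = [{"Id": 1, "A": 10, "B": "x"}, {"Id": 2, "A": 20, "B": "y"}]
right = [{"Id": 1, "A": 10, "B": "x"}, {"Id": 2, "A": 20, "B": "z"}]
fields = ["A", "B"]  # the "selected" fields to compare

# Positional comparison, so no sort is needed: record i vs record i.
deltas = [i for i, (l, r) in enumerate(zip(left, right))
          if row_hash(l, fields) != row_hash(r, fields)]
print(deltas)  # positions whose hashes differ
```

Each record collapses to a single hash value, so the join compares one field per record regardless of how many fields were selected.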

Pros & Cons:

  • You will be comparing one field for every record.  This seems to be a Pro.
  • It is complex to create the dynamic formula to create the HASH function.  This seems to be a Con (for our maintenance).
  • We included an interface tool to allow you to select which fields to compare.  This was a pro.


With 2 million input records (4 fields), approach #1 took roughly 10 seconds to complete.  With the same input, approach #2 took roughly a minute to complete.  Why did it take so long?  Because converting all of the fields to strings and creating the MD5 hash is an expensive operation.  Sorting and handling small records was FASTER than creating the dynamic formula and hash.

Please take a look at the macro and let us know your thoughts.  I appreciate your feedback and want to know if there are features or functions that you would want in this macro if we do decide to make it into a future release.

Macro Location



Wednesday, 13 June 2018

CReW - Who would cross the Bridge of Death must answer me these questions three

#Alteryx18 marked the beginning of a journey for me.  I've long been on this quest, but now take the lead as Adam focuses his efforts on family and Alteryx e2.  My first CReW action was the creation of new SWAG.  These die-cut stickers were gobbled up in Anaheim and I've reserved more for my friends at #Alteryx18 in Europe.

Second on my list of accomplishments was the assumption of the CReW Macro training (only scheduled in Anaheim for #Alteryx18).  Together with Joe Miller from Alteryx, we either wowed the attendees or made sufficient pleas for good survey results (9.6/10.0).  Putting CReW macros to the "test" and opening eyes to how to create value from these gems really sank home as users (beginner to expert) were enlightened by their seeming simplicity and awesome results.  "Oooohs and Aaaahs" abounded as we delighted the students with tips and tricks.  Defensive configuration and use of CReW tools took center stage as Joe provided step-by-step instructions and I added colour to the session.

Where do we go from here?  I want to hear from you!  What ideas and what needs do you have?  I've created some beta macros (they are in the macro district of the Alteryx gallery), but CReW is more than just one person.  Whether you're an Alteryx guru or a newbie, I can use your skills.  Macros need naming, icons need designing, and we also need to manage the CReW inventory.  Please let me know your thoughts and availability to join the CReW.

I'm very honored and excited to carry this torch.  Join me and let's make Adam proud.


Wednesday, 2 May 2018

Passing The Baton

The theme of this year’s Inspire conference is “Alter Everything”, and Alteryx has truly changed my life: it has brought me new friends and new skills, given me the chance to live on a different continent and, through all of that, the opportunity to build the product that I love.

But times change, and with that said I am sad to announce that I am stepping down as the curator of the CReW macros.  If I am honest, I think that already happened a couple of years ago; it has just taken me this long to realise it and make it official.  There is no big reason, my life has just changed: I have a family who I want to spend my free time with, and the time I don’t spend with them I want to get out of the house/office and exercise or see friends (and there is this little project at work that really eats up all of my thinking time right now).  So apologies to everyone who has sent me bug fixes or new macros over the last couple of years and joined my list of “things I need to get to”: I realise now I am not going to get to that list.

But in thinking of what comes next for the CReW macros, there really was only one person I could think to pass the reigns to (that pun just wrote itself).  A long-term contributor to the macros, a good friend and all-round legend of Alteryx: Mr. Mark Frisch, or as some of you may know him, MarqueeCrew.  I will let Mark post a welcome here soon with his ideas for the macros; all that is left for me to do is thank Mark for taking over, and all of you for your suggestions and help with the macros over the last 4 years.

Hope to see many of you at one of the Inspire conferences this year.

(Former curator of the CReW macros)

Wednesday, 14 June 2017

Inspire Las Vegas

Last week I had the pleasure of attending my 9th Alteryx Inspire conference at the Aria hotel in Las Vegas, Nevada.  As always I enjoyed meeting all of our customers, partners and Aces and left the conference feeling exhausted but inspired.

Here are my top 10 personal highlights from the event:

1. The Solution Center - This remains my all time favourite part of Inspire.  A chance to meet with customers and help solve their challenges with Alteryx.  When I'm not booked in anywhere else this is where I spend all of my time.  Came away this year with ideas for a couple of new CReW macros, plus I got a couple of Alteryx Aces to help me out with a problem I was having on a personal Alteryx project.

2. 9 Inspires but 6 years - Explaining to people how I'd been to 9 Inspire conferences, but only been with Alteryx for 6 years (One conference as a client, an extra for London last year and fence post counting)

3. Training - Very excited this year to be able to deliver a training course on the CReW macros. 
Despite a few installation issues surrounding apostrophes in the install path, we got through the macros I wanted to cover and I hope that everybody learnt something.  Along with my course there was a huge range of other training courses which I know were very much enjoyed.

4. CReW Macros in the Keynotes - Was very pleased that the CReW macros were mentioned in both Dean's and the product keynote.  Great to see the recognition for the work we have done as a community building these macros.

5. Mark Frisch named Community Top Contributor of the Year - When the award category came up on the screen I knew there was only one person who was going to win it this year.  Fantastic to see all the amazing work Mark does in the community being recognised.

6. New CReW icon - Created by the very talented Tara McCoy the CReW macro pack now has its own icon.  Thanks Tara!

7. Alteryx Periodic Table - More awesome artwork from Tara and given away at Inspire: The Alteryx Tools as a periodic table!  Check it out and get your own copy over on the community!

8. Closing Keynote - I always enjoy the closing keynotes at Inspire and this year was no exception with an entertaining talk from Freakonomics author Stephen Dubner.  Some great lessons in story telling and data collecting.

9. Alteryx Card Game - I enjoy playing card games, but not sure I have ever featured in one before!

10. Ned's Keynote - Great as always to see Ned talk about the product.  As an employee of a public listed company I am unable to repeat here what he covered.  But if you are interested there are still tickets available for Inspire Europe in London in September!  Hope to see you there.

Thursday, 2 June 2016

Crew Macro Pack - 2016 Q2 Release

The latest release of the CReW macro pack is available for download now at

NOTE: You will require Alteryx 10.0 or later to use this macro pack.

For installation instructions see this page


New Macros

Wait A Second - This month I am very excited to be able to introduce another new contributor to the CReW macro pack: Daniel Brun, who, as well as contributing this macro, has also released his own macro pack "The Inviso macro pack for Alteryx", available for download here.

The Wait A Second macro is a different spin on the throttle tool which allows a user to specify a wait time between records passing through the tool.
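
The behaviour can be sketched as a simple Python generator (the function name and parameter are mine, not the macro's configuration):

```python
import time

def wait_a_second(records, delay_seconds=1.0):
    """Yield each record, pausing between consecutive records."""
    for i, record in enumerate(records):
        if i:  # no pause before the first record
            time.sleep(delay_seconds)
        yield record

# Trickle three records through with a short delay between each.
for rec in wait_a_second(["a", "b", "c"], delay_seconds=0.01):
    print(rec)
```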

Blocking Test Tool - This macro came directly from a client request and is actually something I have seen asked for many times before.  It is very useful when you are updating an external data source and want to check some condition before you write *any* data out.  It checks all of the data against the given expression, and only if all of the records pass does it pass its records downstream.  If any record fails the expression then an error message is thrown and no records are output.
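
In Python terms, the all-or-nothing behaviour looks roughly like this (function name and message wording are mine, not the macro's):

```python
def blocking_test(records, expression):
    """Pass records downstream only when every record satisfies the
    expression; otherwise raise an error and output nothing at all."""
    failures = [r for r in records if not expression(r)]
    if failures:
        raise ValueError(
            f"{len(failures)} record(s) failed the test; no records output.")
    return records

rows = [{"qty": 5}, {"qty": 3}]
print(blocking_test(rows, lambda r: r["qty"] > 0))  # all pass: records flow on
```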

The Test Macros

These are four new macros which were born out of a series of blog posts I wrote back in 2015 about trying test driven development for Alteryx macros.  That experiment actually hit some stumbling blocks which I am yet to write up about, but the macros that were born out of it live on and are very useful for testing in a TDD style or otherwise.

I am releasing them in this release with a BETA flag as they are still very much a work in progress and I know I still have some work to do on them.  However I believe they are in a good enough condition to get some other people looking at them and feeding back any issues and improvements they might find.

They are under their own tool category "CReW Test" which you can hide for now if you want to wait for them to evolve some more.

They are:

Test Runner - This is the macro which runs all of your tests.  You generally have a workflow containing only this macro.  Running that workflow will run all of your tests.

Expect Equal - This macro takes two data streams (Expected and Actual) and throws a (hopefully useful) error message if they differ.

This is the macro where I know I have some work to do and where you as a community can help out.  I need any examples you can find where this tool isn't doing what you expect.  So data sets you think are equal, but it thinks are different and vice versa.
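
For reference, the macro's basic contract resembles the toy sketch below, though this version uses strict record equality, which is not exactly how the real macro behaves:

```python
def expect_equal(expected, actual):
    """Throw a (hopefully useful) error when the two streams differ."""
    if len(expected) != len(actual):
        raise AssertionError(
            f"Record counts differ: {len(expected)} expected, {len(actual)} actual")
    for i, (e, a) in enumerate(zip(expected, actual)):
        if e != a:
            raise AssertionError(f"Record {i} differs: expected {e!r}, got {a!r}")

expect_equal([{"A": 1}], [{"A": 1}])  # equal streams: silence
```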

Expect Zero - This macro expects to receive zero records into it and if it doesn't throws an error telling you.

Expect Error - This macro is for checking that workflows correctly error.  An essential part of testing.  If it is placed within a tool container then it expects a tool in that same tool container to throw the expected error message.  If the correct error message is not thrown then the Test Runner will mark that test as failed.
I intend a follow up article on how to use the macros, but to get started quickly take a copy of the GettingStartedTemplate folder in TestMacros, rename it to something meaningful and run the contained RunTests.yxmd to run the sample tests.

You can also take a look at the CReW macro test suite which you can find in Samples\CReW Macros\Tests.  Again a work in progress, but the starting of what is to follow.

As always please let me know about any issues/feedback you have and hopefully see many of you at Inspire next week!

(If you are not registered already, what are you waiting for?)

Friday, 9 October 2015

Crew Macro Pack - 2015 Q3 Release

It is with great pleasure that I can announce the next release of the CReW macro pack.

It is available for download now at

Or for anyone still on 9.5

It has been a long time coming I know, but I hope you enjoy the new content.  I'm pleased to say this pack has three new contributors from the community, something which I hope we can continue with future releases.

Installation Instructions

  1. Download the zip file above
  2. Extract it to a location on your machine where you want to keep the macros.  (Not in the Program Files/Alteryx folder or an existing Macros folder)
  3. Run Install.yxwz

Community Supported

These macros are not an official Alteryx release and as such they have not been through the full testing process that tools in the product have.  Some of them are prototypes and experimental, but that is some of what makes them so fun.  

If you find an issue let me know either in the comments below or drop me personally a note and I'll take a look, or if you can see how to fix it yourself then even better!

What's New?

New Macros

Wildcard XLSX Input - This macro was contributed by Joe Mako and came from a discussion on this community post.  Ever wanted to read in multiple XLSX files and worksheets?  Then this is the macro for you!  (This is the only macro in the pack which is currently missing from the 9.5 version as it makes use of 10.0 features)

Description - "Reads in multiple sheets from multiple Excel files from a directory. Optionally can filter out sheets, can pull field names from a header row, can auto configure field types, can sample N random sheets, and if the headers do not match then raw data for the sheets that did not match will be in the 'N' output."

Chi Square Test - Our second new macro author this month, this one is from Mark Frisch or MarqueeCrew another very active member of the new community site.

Description - "Have you created a sample set and wanted to know if there was a bias in the sample?  Random doesn't always mean you get an even quantity of data across your variables.  This test will check to see if the distribution of domain values (for a specific variable) are representative of the parent population.

Best to double check your samples before you run your tests!"
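
As a rough illustration of the underlying test (pure Python, with a hard-coded 5% critical value rather than a full chi-square distribution; the data here is invented):

```python
def chi_square_stat(observed: dict, population: dict) -> float:
    """Pearson chi-square statistic comparing a sample's category counts
    against the counts expected from the parent population's proportions."""
    n = sum(observed.values())
    total = sum(population.values())
    stat = 0.0
    for category, pop_count in population.items():
        expected = n * pop_count / total
        stat += (observed.get(category, 0) - expected) ** 2 / expected
    return stat

population = {"A": 500, "B": 300, "C": 200}  # parent distribution
sample     = {"A": 55,  "B": 25,  "C": 20}   # a 100-record sample

stat = chi_square_stat(sample, population)
# The critical value for df = 2 at the 5% level is about 5.991.
print(stat, "biased" if stat > 5.991 else "looks representative")
```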

Moving Summarize - This is a macro I wrote almost a year ago now and featured on this blog here.  But it really has been that long since the last macro pack release!

Description - "Calculate moving averages or any type of moving summarize"
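
The "any type of moving summarize" idea can be sketched with a trailing window and a pluggable summary function (names are mine, not the macro's):

```python
from collections import deque

def moving_summarize(values, window, func):
    """Apply any summary function over a trailing window of records."""
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        out.append(func(buf))
    return out

data = [3, 1, 4, 1, 5, 9]
print(moving_summarize(data, 3, lambda w: sum(w) / len(w)))  # moving average
print(moving_summarize(data, 3, max))                        # moving maximum
```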

Trim Mean - This macro was written by me and was inspired from the recent Alteryx roadshow in London, with a user requesting this functionality which is available in Excel.

Description - "Returns the mean of the interior of a data set. Trim Mean calculates the mean taken by excluding a percentage of data points from the top and bottom tails of a data set. You can use this macro when you wish to exclude outlying data from your analysis."
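
The Excel-style calculation can be sketched like this (the excluded count is the total points times the percentage, rounded down and split evenly between the two tails, as Excel's TRIMMEAN does):

```python
def trim_mean(values, percent):
    """Mean of the interior of a data set: exclude `percent` of the
    points in total, split evenly between the top and bottom tails."""
    data = sorted(values)
    per_tail = int(len(data) * percent) // 2
    trimmed = data[per_tail: len(data) - per_tail]
    return sum(trimmed) / len(trimmed)

# Excel's documented TRIMMEAN example returns ~3.778 for these inputs.
print(trim_mean([4, 5, 6, 7, 2, 3, 4, 5, 1, 2, 3], 0.2))
```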

Bug Fixes

Missing Tool quick descriptions - A big thanks to David Zurbuchen for both spotting these were missing, but better still taking the time and effort to fix them.  A great example of how you can contribute to the macro pack with small but important fixes.

Runner Macro fixes - There were a number of issues with the runner macros in the first release of them.  Many thanks to all of the people who both reported them and spent time with me helping to work out exactly what it was that caused them.  There is one outstanding issue that I know of which involves Unicode characters in the module path.

Add Totals Macro - Now works when there are spaces in the field names.

Get Involved

I want to thank again the contributions from Joe, Mark and David.  You guys have given me the motivation to get this going again.

And on that note I want to encourage anyone else who is interested in getting involved to get in touch either in the comments here or via email.  I really want to make this a community effort and try and make the releases a bit more regular.  There is plenty to do on the existing macros, samples, help files and I also want to try and make some more progress on my Test Suite macros.

Till next time.

Thursday, 23 April 2015

Testing Alteryx Macros - Part 4 - Testing For Errors

At the end of my last post we decided the next test to write was

"If a user tries to total a string column they get a meaningful error message."

This is going to require some new functionality in the test framework, since for this test to pass we will need the Alteryx module to fail (and not just fail, but fail with a very specific message).  I am going to start with how a user of the test framework (i.e. a macro developer) would do this and then we'll take a quick look at how the test framework implements this feature.

Write a Failing Test

For this test we will use a new macro in the test framework the Expect Error macro:

And we write a test which looks like this:

  • The Text Input has a single Text field called Field1 containing "Some Text"
  • The Expect Error macro has a single configuration option "Expected Error Message" which we set to the error message we expect to get in this scenario "Not possible to total Field1 as it is a non-numeric type: String"
Now if we run our test runner module we get the following output

So we get a message telling us what error message we were expecting to get from which tool(s) and then messages telling us what error messages we actually got (in this case the standard formula message about trying to use a string when a number was expected).

We can make one slight improvement to this test, in that we know exactly which tool we are expecting the error message to come from, so we should specify that in the test.  We do that by enclosing the Expect Error tool in a tool container with whichever tools we expect the error to come from:

Giving us the following log from the test runner:

Make The Test Pass

Having a failing test we can now add functionality to the macro to make it pass.  Which is a fairly straightforward process:

  • We already had a Field Info tool which gave us the types of all of our fields.
  • New filter tool filters off any fields of type String.
  • Then a message tool throws the correct error message to the macro user configured like so:

Run the Test Runner again and:

Success! We have written our first unit test which tests for an error.

So How Does It Work?

I said at the start that after we had used the new functionality we would take a look at how it worked.  This detail is not necessary to be able to successfully use the test framework, but might be interesting to see what's going on behind the scenes.

The Expect Error Macro

The contents of the Expect Error Macro are probably the most boring Alteryx macro you will ever see.  It looks like this:
That's right: apart from the question, it is empty!  All of the "magic" happens in the Test Runner macro.

The Test Runner Macro

This is looking a bit more complicated than the last time we saw it, and could probably do with being refactored into a few sub-macros:

  1. The macro here looks inside each of the Test Modules and parses out the Tool XML.  It is looking for Test Modules which contain the Expect Error macro.
  2. Modules which don't contain the Expect Error macro go this top route and get run as we saw previously.  If the module errors it fails the test and reports to the output log.
  3. Modules which do contain the Expect Error macro are also run with the Runner macros but this route the test fails if the test module runs without errors.
  4. This section again parses out the module XML and works out what error messages and from what tools the Expect Error macro is expecting.
  5. This join tool matches the expected error messages against the error messages which came out of the log.
  6. The final section of tools deals with reporting any mismatches between the expected and actual error messages and reporting the success or failure of the test.
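
Steps 1 and 4 above amount to walking the module XML looking for Expect Error tool nodes.  A rough Python equivalent, where the XML below is a heavily simplified stand-in and not the real .yxmd schema:

```python
import xml.etree.ElementTree as ET

# A trimmed stand-in for a test module's XML (real .yxmd files carry far
# more structure; this shape is an assumption for illustration only).
MODULE_XML = """
<AlteryxDocument>
  <Nodes>
    <Node ToolID="3">
      <GuiSettings Plugin="AlteryxGuiToolkit.Questions.Tab" />
    </Node>
    <Node ToolID="7">
      <EngineSettings Macro="Expect Error.yxmc" />
      <Properties>
        <Configuration>
          <Value name="Expected Error Message">Not possible to total Field1</Value>
        </Configuration>
      </Properties>
    </Node>
  </Nodes>
</AlteryxDocument>
"""

def expected_errors(xml_text):
    """Find Expect Error tools and pull out (ToolID, expected message)."""
    root = ET.fromstring(xml_text)
    found = []
    for node in root.iter("Node"):
        engine = node.find("EngineSettings")
        if engine is not None and "Expect Error" in engine.get("Macro", ""):
            msg = node.find(".//Value[@name='Expected Error Message']")
            found.append((node.get("ToolID"), msg.text if msg is not None else ""))
    return found

print(expected_errors(MODULE_XML))
```

The real Test Runner then joins this list of expected (tool, message) pairs against the errors that actually appeared in the run log, as described in steps 5 and 6.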