For a long time our list of example DataHub scripts has contained a script named MakeArray.g, which lets you build an array point from any number of individual points.  When the value of any of those points changes, the corresponding value in the array changes too.

Recently a DataHub user asked whether there is a way to script the opposite.  His data source provides data in an array format, but his client needs the individual points, so he asked if it would be possible to break up an array in the DataHub using a script.

The result is a new script: BreakArray.g.  We have posted the code here, and will add it to the documentation and include it in the next release of the Cogent DataHub.  Enjoy!

BreakArray.g code:

/*
 * Break an array point into individual points.  For example, if we
 * have a point 
 *		default:myarray = [ 1, 2, 3 ]
 * then we would like to create an output like
 *		default:myarray_0 = 1
 *		default:myarray_1 = 2
 *		default:myarray_2 = 3
 *
 * To set up an array to be broken into individual points, call .MonitorArray().  This
 * method takes a point name (of the array), a format string, and an index offset.  The
 * format string specifies a suffix to add to the base point name, and the index offset
 * determines where to start numbering the suffix.  Typically index offset is 0 or 1.
 *
 * For example, to create points like:
 *	default:myarray_0
 *		use .MonitorArray("default:myarray", "_%d", 0)
 *	default:myarray_001
 *		use .MonitorArray("default:myarray", "_%03d", 1)
 *	default:myarray[1]
 *		use .MonitorArray("default:myarray", "[%d]", 1)
 *
 * This script will automatically respond to changes in the size of the array by
 * creating new points as the array expands, or by marking existing points as
 * not connected as the array contracts.
 */

require ("Application");

class BreakArray Application
{
}

/* Write the 'main line' of the program here.  You should only need to modify the constructor
 * to match your data points. */
method BreakArray.constructor ()
{
	// Delete the calls to .setupTest.  They are just here to test this script.
	.setupTest ("default:myarray", [ 1, 2, 3, 4, 5 ]);
	.setupTest ("default:myarray2", [ 1, 2, 3, 4, 5 ]);

	// Add, remove or modify .MonitorArray calls to work with any number of array points.
	.MonitorArray("default:myarray", "_%03d", 1);
	.MonitorArray("default:myarray2", "[%d]", 0);
}

/* --------------------------------------------------------------------------------- */

// You should not need to modify below this point.

/*
 * Create data points for an array using an index and a format string.
 * For example, to create points:
 *	default:myarray_0
 *		use suffixformat="_%d", indexoffset=0
 *	default:myarray[1]
 *		use suffixformat="[%d]", indexoffset=1
 *	default:myarray_001
 *		use suffixformat="_%03d", indexoffset=1
 */
method BreakArray.MonitorArray(pointname, suffixformat, indexoffset)
{
	local	value;
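	// Register a change handler on the array point.  The backquote delays
	// evaluation: @self, @suffixformat and @indexoffset are substituted
	// immediately, while 'this' (the point symbol) and 'value' (its new
	// value) are bound each time the point changes.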
	.OnChange(symbol(pointname), `(@self).Break(this, value, @suffixformat, @indexoffset, t));

	// If we have a current value, break the array for the first time now.
	if (!undefined_p(value = eval(symbol(pointname))))
	{
		.Break(symbol(pointname), value, suffixformat, indexoffset, nil);
	}
}

method BreakArray.Break(pointname, value, suffixformat, indexoffset, have_previous?=nil)
{
	local	elementname, elementsym, suffix;
	local	indx = indexoffset;
	local	info = PointMetadata(pointname), elementinfo;
	local	type, curlen, prevlen, i;

	//princ(info, "\n");

	if (array_p(value))
	{
		// Find the element type
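		// (Mask off the high-order flag bits, keeping the element type
		// code in the low 12 bits of the canonical type.)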
		type = info.canonical_type & ~0xfffff000;

		// For each value in the array, create a point name for it.  If the point
		// does not exist, or has an empty canonical type, then create the point and
		// set its canonical type to the type of the parent array.
		with element in value do
		{
			suffix = format(suffixformat, indx);
			elementname = string(pointname, suffix);
			elementsym = symbol(elementname);
			elementinfo = PointMetadata(elementsym);
			if (!elementinfo || elementinfo.canonical_type == 0)
			{
				// This point has never been created in the DataHub
				// Create it and match its canonical type to the array type.
				datahub_command (format("(create %s 1)", stringc(elementname)), 1);
				datahub_command (format("(set_canonical %s %d 1)", stringc(elementname), type), 1);
			}
			datahub_write(elementname, element, nil, info.quality, info.timestamp);
			indx++;
		}
	}

	// Find the previous length.  If the array used to be longer, then we should
	// mark the values that are no longer present as Not Connected.  If this function
	// is called from within a change handler, the variable 'previous' is implicitly defined.

	if (have_previous)
	{
		prevlen = (undefined_p(previous) || !array_p(previous)) ? 0 : length(previous);
		curlen = (array_p(value) ? length(value) : 0);

		for (i=curlen; i<prevlen; i++)
		{
			suffix = format(suffixformat, indx);
			elementname = string(pointname, suffix);
			elementsym = symbol(elementname);
			elementinfo = PointMetadata(elementsym);
			if (elementinfo)
			{
				//princ ("Set array element ", elementname, " as not connected\n");
				datahub_write(elementname, elementinfo.value, nil, OPC_QUALITY_NOT_CONNECTED, info.timestamp);
			}
			indx++;
		}
	}
}

/*
 * This method is just used to create a test data set.
 */
method BreakArray.setupTest(pointname, value)
{
	datahub_command (format("(create %s 1)", stringc(pointname)), 1);
	datahub_command (format("(set_canonical %s \"R8 array\" 1)", stringc(pointname)), 1);
	datahub_write (pointname, value);
}

/* Any code to be run when the program gets shut down. */
method BreakArray.destructor ()
{
}

/* Start the program by instantiating the class. */
ApplicationSingleton (BreakArray);

On September 13th there will be a summit in Tokyo among the leading proponents and users of the Cogent DataHub.  Hosted jointly by Cogent and Nissin Systems, Inc., and supported by VEC (Virtual Engineering Company), this summit will provide an opportunity for key players in IT, M2M, and cloud services to discuss and collaborate on the concept of M2C (machine-to-cloud) systems, and how to leverage this emerging technology in their particular areas of expertise.

The keynote speech, “Cloud Computing and System Security”, will be delivered by the Executive Director of VEC, Mr. Masashi Murakami, while an overview of Cogent’s technology and vision, “Cogent DataHub Today & Tomorrow”, will be presented by Mr. Andrew Thomas, President of Cogent Real-Time Systems.  Following these will be an opportunity for those directly involved in M2C projects to share their experience and insights.

This summit is to be held just one week after a special meeting of VEC to discuss these and similar topics.  A joint presentation, “Remote Monitoring Using Cloud Technology”, will be given at that meeting by Mr. Thomas and Mr. Mickey Yamazaki, Marketing Adviser to Cogent.  Appealing to a growing groundswell of interest in M2M and cloud services, these two events are expected to underline the progress already being achieved in the new area of M2C (machine-to-cloud) technology, as well as provide a vision for its future.

Attendance at these two events is limited, but we plan to report on the highlights of both of them in upcoming blogs, so stay tuned.

As I mentioned in a previous blog, the best way to network OPC is by using OPC tunnelling.  This approach eliminates the need for DCOM, and allows the data to cross the network via TCP.  But as I also said, there are different approaches to OPC tunnelling, and not all are equal.  Some give you much better performance when things go wrong.

For example, what happens when a connection drops?  What if the network goes down, or somebody pulls out a cable, or you get intermittent wireless interference?  What happens next all depends on how your OPC tunneller works.

Problem
Most tunnel products simply pass the OPC requests and replies across the network, so that each message from an OPC client travels the entire distance to the OPC server, and each response travels the entire way back.  When the connection drops, the logical thing happens: the OPC client gets disconnected from the OPC server.  This means that the OPC client either goes into its recovery and reconnect cycle, or an operator needs to step in.

Solution
The solution is to implement an OPC/TCP protocol converter at each end of the tunnel that talks OPC to the server or client on one side, and maintains a mirrored data set across the network via TCP on the other.  This way, when the network goes down, sure, the data updates stop.  But the connections to the OPC client and to the OPC server are not lost.  The last known values from the OPC server remain intact, and no reconnect cycle or operator intervention is necessary.  As soon as the network comes back up, the data immediately starts to update again, with no need to negotiate an OPC reconnect.  This is just one of the important features that can mean the difference between satisfaction and disappointment with the OPC tunnelling solution you choose.

We encourage anyone who needs to network OPC to read our whitepaper “OPC Tunnelling – Know Your Options” and watch the video below.

Are you looking for a way to log data from an OPC server?  Maybe you’ve recently installed new OPC capability.  Or perhaps you need to do something beyond the range of your SCADA system.  In any case, the process data available through OPC can be highly valuable.  More and more companies are scrambling to tap into this freely available resource, which can serve as a window into problems and opportunities hidden in the system.  How can you maximize the benefits of logging this data?  A key factor that often gets overlooked is flexibility.

There are a number of ways to connect an OPC server to a database, but many of them are not very flexible.  For example, the data loggers built into SCADA systems generally log only to their proprietary database.  For those systems, and many stand-alone systems, a user is restricted to a specific data table format.  And in many cases, there are limits on what can trigger the action of writing to the database.  This lack of flexibility can mean that you don’t really gain the full value of your OPC data.

To start with, a truly flexible OPC data logging solution will allow you to log to whatever database you want.  You should be able to choose SQL Server, Oracle, MySQL, FileMaker, or any other ODBC-compliant database.  At the same time, you should also be able to log to the database used by your SCADA system, if necessary.

What’s more, you should have the option to log to an existing table in the database, or to create any type of table you like.  Most data logging products force you to use only the table design that they specify.  But why should you have to follow their generic table format?  Rather than redesign your system to fit it into a compromise “typical configuration” conceived by a product designer with no knowledge of your specific needs, why not use a flexible tool that can log data to the tables that are already in your database, or even create its own?

Finally, consider how the data gets logged.  A highly flexible OPC logging tool will give you complete freedom to assign triggers for logging based on point change, timers, or any other requirement.  It will let you set conditions, so that when a trigger does fire, the logging won’t happen unless all the specified requirements are met.  Some data loggers even offer the option to deadband values, so that data only gets written when a value crosses a certain threshold of significance.
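To make the deadband idea concrete, here is a minimal sketch in the same scripting language as BreakArray.g above.  Everything in it is hypothetical: the DeadbandLog class, its Monitor and MaybeLog methods, and the point name default:mypoint are invented for illustration, and a real script would hand the value to your database writer rather than printing it.

require ("Application");

/* Hypothetical example: log a point only when it has moved by at least
 * 'deadband' since the last value we logged. */
class DeadbandLog Application
{
	lastlogged;		// the value most recently passed to the logger
}

method DeadbandLog.constructor ()
{
	// Watch default:mypoint, logging changes of 5.0 or more.
	.Monitor ("default:mypoint", 5.0);
}

method DeadbandLog.Monitor (pointname, deadband)
{
	.OnChange (symbol(pointname), `(@self).MaybeLog(this, value, @deadband));
}

method DeadbandLog.MaybeLog (pointname, value, deadband)
{
	local	delta;

	// Treat the first value we ever see as always loggable.
	delta = (undefined_p(.lastlogged) ? deadband :
		(value > .lastlogged ? value - .lastlogged : .lastlogged - value));
	if (delta >= deadband)
	{
		.lastlogged = value;
		// A real script would write to the database here; we just
		// print the value to show when the trigger fires.
		princ ("Log ", pointname, " = ", value, "\n");
	}
}

ApplicationSingleton (DeadbandLog);

The same pattern extends to timer-based triggers or compound conditions: the change handler fires on every update, and the method body decides whether the update is significant enough to log.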

These are just some of the things to look for when considering an OPC data logging tool.  There are more, such as the capacity to store data when the database is not available and then log it later, as well as the ability to query a database and write that data back to the OPC server.  We’ll talk about those in future blogs.

DataHub WebView running in five minutes
We’ve got a video that shows how to get DataHub WebView up and running in 5 minutes.

The video takes you through the DataHub installation, connecting to an OPC server, configuring and launching DataHub WebView, editing a page with 3 different controls to display live values from 3 different OPC tags, then saving the page and viewing it locally and on the web.  All in just 5 minutes.

Why the emphasis on 5 minutes?  What’s the big deal?  Well, anyone who has worked with an HMI builder will tell you that’s almost unheard of.  Typically just installing and configuring takes longer than that.  Then you need to set up a programming environment.  But the real measure of speed is how quickly someone can create and publish a page.

Like the Cogent DataHub itself, DataHub WebView was designed to be powerful and robust, yet quick and easy to use.  The point-and-click interface should appeal to novice and expert alike.  This ease of use translates into speed for the page designer, which means lower development costs.  When a new piece of equipment gets added to your system, or you need to monitor another process, you can edit your HMI pages to suit, and clients simply need to reload them to see the updates.

Don’t believe me?  See for yourself.  Check out the video, or better yet, download the Cogent DataHub and try out DataHub WebView.  I think you’ll be pleasantly surprised.