swms.de blog (https://www.swms.de/en/blog/), retrieved Thu, 20 Feb 2025 04:29:22 +0000

Easy and Flexible: Knowledge Based Engineering in Product Development
Tue, 17 Dec 2019

On the way to development 4.0


Enter the race with Knowledge Based Engineering

You might think the industry is in an eternal race: a race in which a product's time to market is the lap time and product quality is the decisive factor for staying in the race, that is, in the competition. The company that develops, produces and launches the best product fastest is ahead.

As in motor sport, product development sees its own waves of innovation. Digitalization is one of the most important of these developments and is in full swing. In this context, Knowledge Based Engineering (KBE) plays a key role in product development. Some introductory and reflective thoughts on this topic have already been discussed. Now the focus is on approaches and possible applications of KBE and how these can be implemented.

SWMS is an experienced and reliable partner at your side. We make a pit stop and prepare your car for the coming race laps. 

Start your Engines

The knowledge of a company represents a high value. Used correctly, it can make the difference between success and failure in the market.

KBE, or knowledge-based engineering, is a branch of knowledge management. It pursues the strategy of capturing, structuring and formalizing existing and newly emerging knowledge, making it available in a targeted manner and applying it automatically as required. In KBE, this principle is applied to the development process of technical products. Implementation is essentially carried out by software modules and extensions in the CAD and PLM environment that take over informative, assisting and automating tasks.

Select a racing route

Anyone who wants to be at the forefront of the competition will find it difficult to avoid introducing KBE in product development. Particularly for repetitive design steps or variant design, KBE enables significant time savings, fewer errors and higher product quality.

But where to begin?

Which KBE approaches and which applications are the right ones? Which processes should you start with, and how?

We at SWMS have broad experience and knowledge in the individualized planning, development, implementation, integration and support of KBE software, and we can support you competently and with a focus on solutions. The next section presents an example of an easy and flexible entry into the world of KBE that promises quick successes and savings.

The first laps - Easy to do

Technical knowledge can be stored in a company in various forms and places - in digital and printed documents, in design models and drawings, as internal standards and guidelines, as tables, or as the knowledge and experience of individual employees. Capturing all this is an important, though time-consuming and ongoing, process. To meet the associated time and financial challenges, it is helpful to divide the process of knowledge acquisition and preparation into manageable phases and steps.


KBE in product development


One way to convert small and even tiny "knowledge packages" directly into usable KBE applications is to develop design checks. Such checks can easily be integrated into current CAD programs and make it possible to verify a product's compliance with any design rule.

This approach offers numerous advantages, especially for the initial implementation of KBE. Formulating and formalizing individual design rules is already the first step in the knowledge preparation process. The individual rules themselves represent small, bundled knowledge packages. They can be created in any order and according to urgency or relevance.

The checks enable the knowledge packages to be used directly by end users in CAD. Errors are detected at an early stage and production delays avoided. Each individual inspection results in clearly measurable improvements after only a short time.
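As a first impression of how such checks might look, here is a deliberately simplified, CAD-independent sketch in Python. The part model, the rule thresholds and all names are invented for illustration; a real check would query the CAD system's API instead of a plain dictionary.

```python
# Hypothetical sketch: design rules as small, independently maintainable checks.
# The part model and thresholds below are invented for illustration only.

def check_min_wall_thickness(part, minimum=2.0):
    """Rule: every wall must be at least `minimum` mm thick."""
    return [f"wall '{w['name']}' is {w['thickness']} mm (min. {minimum} mm)"
            for w in part["walls"] if w["thickness"] < minimum]

def check_drill_diameters(part, allowed=(3.0, 5.0, 8.0)):
    """Rule: only standard drill diameters may be used."""
    return [f"hole '{h['name']}' uses non-standard diameter {h['diameter']} mm"
            for h in part["holes"] if h["diameter"] not in allowed]

def run_checks(part, checks):
    """Apply all registered checks and collect the violations."""
    violations = []
    for check in checks:
        violations.extend(check(part))
    return violations

part = {
    "walls": [{"name": "base", "thickness": 1.5}, {"name": "rib", "thickness": 3.0}],
    "holes": [{"name": "mount", "diameter": 5.0}, {"name": "vent", "diameter": 4.2}],
}
print(run_checks(part, [check_min_wall_thickness, check_drill_diameters]))
```

Each rule is a small, self-contained knowledge package, which is exactly what makes this entry point flexible: rules can be added one at a time, in any order.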

At the same time, the development of design rules results in a constantly growing pool of knowledge on the basis of which further KBE applications can be introduced, for example:

  • the integration of automatic repair and correction mechanisms,
  • the introduction of design templates,
  • the automation of individual design steps,
  • up to a knowledge-based design assistant for the semi-automated, assisted design of entire assemblies.

Checks integrated in CAD represent an uncomplicated and flexible way of entering the world of KBE. They promise immediate and clear effects as well as fast amortisation.


SWMS supports you with KBE applications


The journey is the destination

KBE helps you stay at the front of the competitive race, and the technical possibilities are constantly evolving. We at SWMS support you in introducing, individualizing and maintaining KBE applications in your company.

In this way, we prepare you in the best possible way for the future race laps - with KBE and beyond.
Modular Software Development with NuGet
Tue, 17 Dec 2019

What makes modular software development so special?

Modular software development refers to separating a software project into individual, logically distinct components. This modularity means that the individual software components can be developed separately from each other and then (re)used. As already mentioned in the blog post on unit testing, this has the great advantage that the individual modules can be tested in isolation. Software developers can use finished components without having to redevelop them (in whole or in part).

This blog post will discuss how software packages are created and delivered.

One of the most common ways to realize this in .NET software development is the package management system NuGet. NuGet is a free system developed by Microsoft that allows you to obtain software packages from data sources (repositories) and integrate them into your projects.

What are NuGet packages?

NuGet packages are essentially .ZIP files whose contents follow a certain format. In .NET software development these are usually .DLL files plus some metadata (authors, description, etc.) as well as references to other NuGet packages, which are resolved automatically if present. Thanks to direct support in Visual Studio, these files can easily be integrated, versioned and used in projects.
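Because a .nupkg is structurally just a ZIP archive, its contents can be inspected with any ZIP tool. The following Python sketch builds a minimal mock package in memory (the file names and framework folder are invented for illustration) and reads it back with standard ZIP tooling:

```python
import io
import zipfile

# Build a minimal mock .nupkg in memory: a .nuspec manifest plus an assembly
# under lib/. File names and the framework folder are invented for illustration.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as nupkg:
    nupkg.writestr("Example.Package.nuspec",
                   "<package><metadata><id>Example.Package</id>"
                   "<version>1.0.0</version></metadata></package>")
    nupkg.writestr("lib/net45/Example.Package.dll", b"")  # placeholder assembly

# A .nupkg is just a ZIP archive, so any ZIP reader can list its contents.
with zipfile.ZipFile(buffer) as nupkg:
    print(nupkg.namelist())
# → ['Example.Package.nuspec', 'lib/net45/Example.Package.dll']
```

Renaming a real .nupkg file to .zip and opening it shows the same structure.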

How do I use NuGet packages?

The following example shows how to install a NuGet package within a .NET project using Visual Studio 2017 as an example.

  1. Different sources for packages can be added under Tools -> Options -> NuGet Package Manager -> Package Sources. For example, the default source is "https://api.nuget.org/v3/index.json" and will list packages from nuget.org, one of the largest repositories.
  2. URLs or folders can be used as sources for NuGet packages. For example, network drives can be used to provide packages on the local network.
  3. In the respective project, packages from the respective sources can now be added via Project -> Manage NuGet Packages -> Browse -> Install NuGet Packages.
  4. The new data can now be referenced in the project's program code.

This describes the entire process that a user must go through to use new NuGet packages. But if we want to provide NuGet packages ourselves to share with the world (or just colleagues), we need to create them and then make them available through public or private sources.

How do I create NuGet packages?

To create NuGet packages it is necessary to download the NuGet command-line tool. At "https://www.nuget.org/downloads" you will find download links for nuget.exe. Store the file on your computer and add the directory containing it to the "Path" environment variable.

Creating NuGet packages


The .nuspec file

To create a NuGet package, a .nuspec file (NuGet Specifications) is required. This contains metadata and further information about files in the package. In addition to information about the title, author, copyright, etc., other dependencies and files to be used are also listed. To create a .nuspec file, you can navigate to the directory of the project (.vbproj, .csproj) via command line and execute the command "nuget spec". Alternatively, this can be done with the same command via the "Package Manager Console" which is integrated in Visual Studio 2017. This command creates a .nuspec file with the same name as the project.

An example of a slightly customized .nuspec file looks like this:

<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <title>$title$</title>
    <authors>$author$</authors>
    <owners>$author$</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$description$</description>
    <releaseNotes>$releaseNotes$</releaseNotes>
    <copyright>Copyright 2018</copyright>
    <tags>SWMS</tags>
  </metadata>
</package>

The $ characters indicate variables, some of which are replaced automatically later in the process.
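The replacement mechanism itself can be pictured as a simple token substitution. This Python sketch mimics, with invented project values, what happens to the `$...$` placeholders during packing; it is an illustration of the principle, not NuGet's actual implementation:

```python
import re

# Values that would normally come from the project/assembly metadata;
# invented here purely for illustration.
project_info = {"id": "Example.Package", "version": "1.0.0", "author": "SWMS"}

nuspec_template = "<id>$id$</id><version>$version$</version><authors>$author$</authors>"

def fill_placeholders(template, values):
    """Replace each $name$ token with the corresponding project value."""
    return re.sub(r"\$(\w+)\$", lambda m: values[m.group(1)], template)

print(fill_placeholders(nuspec_template, project_info))
# → <id>Example.Package</id><version>1.0.0</version><authors>SWMS</authors>
```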

The .nupkg file

The command "nuget pack DateiName.vbproj" creates a NuGet package from the selected VB project by using the previously created .nuspec file. For C# projects, the file extension is "csproj". The command "nuget pack" automatically replaces certain placeholders (e.g. <title>$title$</title>) with information of the respective project. Alternatively, the "nuget pack" command can also be applied directly to the .nuspec file. The placeholders will not be replaced automatically. Furthermore, it is possible that the respective .dll files, which can be found under "\bin\Release\" for example, are not found. These have to be added manually in the .nuspec file.

How do I provide NuGet packages?

The provision of packages can be realized in several ways. For example, you can place the NuGet packages in a folder on the local network and define it as a package source. It is also possible to upload packages directly to the "nuget.org" data source under "Upload". However, packages uploaded there are publicly available to every other developer.

For internal use, it is possible to set up private servers for provisioning.

A very convenient way to provide NuGet packages to a limited audience is to use Visual Studio Team Services (VSTS). There it is very easy to set up so-called feeds, which can be used as NuGet sources. A follow-up post will discuss the requirements and the implementation using VSTS.

Conclusion

NuGet packages are a very convenient and efficient way to manage and exchange software components.

This blog post is intended as a short introduction to software development with NuGet. It explained the purpose of NuGet and how to use, create and publish NuGet packages.

Good luck with your projects!

More Space for Ideas - Automated Design
Tue, 17 Dec 2019

The Sword of Damocles of Digitalization

"Digitalization" and - in the technical field - "Industry 4.0" are omnipresent and much discussed. To some, they may already seem trite, but there are also many debated concerns about the future world of work and the preservation of one's own job.

In almost all areas, a wide variety of processes can be automated today and the technical possibilities are constantly growing. The job description of an engineer is also affected by this, so that current developments can cause not only enthusiasm, but also worries and fears about the future.

Many of our customers and most of the employees at SWMS belong to this professional group. Therefore, this article addresses these concerns. It deals with current projects and thoughts around design and its automation as part of digitalization.


Competition or Symbiosis - Engineers vs. KBE

The automation of design steps and processes is embedded in the principle of knowledge-based design, also known as Knowledge Based Engineering (KBE). SWMS has a broad, well-founded spectrum of knowledge and experience in this field. Among other things, we supervise projects such as

  • the automated generation of specific, standardized design elements or details,
  • the automatic checking and correction of designs against defined specifications,
  • the automatic creation of individualized parts lists and individual part or assembly drawings,
  • right up to the automatic generation of entire assembly designs using specially tailored tools,
all of which are intelligently embedded in our customers' individual CAx or PLM environments.

Tasks of this kind are usually still performed manually, although often with the aid of digital technology. This naturally raises the question: "Where in automated design is there still room for the engineer?" The answer may lie in a closer look at, and a fresh reflection on, the core of this profession.

The term "engineer" can be traced back to the Latin "ingenium", which can be translated as "ingenious invention", "clever idea" or "astuteness". In the field of development and construction, the central task of an engineer is to find a feasible, economical and practical technical solution to a known problem, which fulfils defined requirements and guidelines to the best possible extent.

Creativity is needed!

Developers have always had technical tools that make their work easier. From the drawing board and slide rule through the first computers, CAD and simulation technology, modern KBE applications represent the latest and most far-reaching stage of such tools. But they are no substitute for engineers.

Let's take a closer look at the development and design process as it is. It starts with a technical problem for which a solution is needed. The number of all theoretically possible solutions for such a problem is initially confusingly large. It is limited by technical guidelines and the case-specific limits of technical feasibility. Requirements for the solution as well as development procedures and methods point the way in the selection of suitable approaches and their elaboration.


KBE development


During product development, great uncertainties as well as dynamic, abstract, qualitative and sometimes elusive requirements and constraints have to be dealt with. In addition, a constantly increasing complexity and networking of technical products and systems can be observed. This requires a high level of intuition, creativity, flexibility as well as extensive technical knowledge, experience and communication skills of the engineers. And this makes them irreplaceable in their profession, even in the long term. At the same time, the further expansion of information technology tools for design is only sensible in view of these challenges.

KBE applications therefore represent important tools for taking over certain tasks, such as standard designs, geometry generation and product analysis, validation and documentation in the design process. If such standard processes, which often have to be repeated, are automated, this does not replace engineers. Rather, they are relieved where routine tasks arise and are given more room for tasks that require creativity and inventiveness.

For development companies, KBE applications offer the opportunity to shorten development times, increase product quality and the degree of standardization, and avoid errors earlier and more reliably. At the same time, the creative potential of the employed engineers can be used better and more specifically, which ultimately enables an increase in the degree of innovation.


On the Way to "Construction 4.0"

The conceivable and real spectrum of KBE applications is constantly growing. New approaches from artificial intelligence and simulation, as well as ever-increasing computing capacity, contribute to this. It therefore remains to be seen how such innovations will affect the engineering profession.

As long as KBE is understood as an aid in design, is developed and used carefully, and gives engineers more freedom for creative processes, the advantages mentioned above are reinforced. The profession of "engineer" may be further sharpened or even reshaped, but it is certainly not threatened.

Another article on the exciting and current topic of KBE will be published shortly, discussing individual aspects, current approaches and new possibilities in more detail.

Lean, Agile, Innovative. Development at SWMS
Tue, 17 Dec 2019

Unpopular Onlooker: The Risk

At some point we are all customers and none of us likes to buy a pig in a poke. The bigger the investment - for example an individual software solution - the less risk you want to take. Unfortunately, this risk seems to increase with the size of the project. This also applies to the contractor, depending on the contract.

Frequently, fixed budgets are therefore agreed for a fixed scope of delivery. Although this provides clarity about sums and deadlines, a look at what actually happens before a project is completed can make the laughter about the familiar project cartoons stick in your throat. In the best case, the estimate at completion was close to the actual project costs.

Agility. More than a Trend Word for SWMS

As companies grow, so does the need for organization. Predefined processes and hierarchies ensure transparency, but at the expense of flexibility and creativity; dispensing with defined structures, in turn, costs control and efficiency. The agile approach tries to create control through transparency without radically overturning grown structures and proven processes.

When SWMS was looking for a more flexible alternative to Scrum in 2015, we came across IT Kanban by David J. Anderson, whose evolutionary approach appealed to us.

Kanban can be applied to any process without having to change it initially. For SWMS, this was an important point for getting employees on board. The philosophy behind Kanban and how it works in detail would go beyond the scope of this article, but they are definitely worth reading up on (see "More about this topic")! Without this knowledge, it will be difficult to implement Kanban in a target-oriented way and to evaluate its success later.


Fig. 1: The Original Kanban at SWMS

The introduction to the topic and methodology was accompanied by the agile trainer Holger Koschek. A cross-project Kanban board was set up, a WiP (Work in Progress) limit was defined, and a time slot for the daily standup meeting was found. Furthermore, a set of rules matching our processes was created from parts of the Kanban method portfolio. Three years have passed since then, and it is time for a small conclusion:


Kanban at SWMS - A Success Story?

The superficial truth is: the Daily Standup still takes place consistently. However, the processes in the company are still quite heterogeneous, and the Kanban board is, strictly speaking, no longer one.

Did Kanban fail at SWMS?

Doubts about the success of Kanban at SWMS are also discussed internally: in the daily meetings, with rounds of often more than 20 employees, the Kanban board soon literally fell out of sight. The benefit of knowing what is happening in other projects is also not always obvious at first glance. However, solution ideas are sparked again and again, and the resulting knowledge about bottlenecks, processes and methods used in the company is undisputedly of great value.

Looking at the working methods within the teams, we find that all teams now work with pull-based ticket systems that are more or less similar to Kanban. The standups have been extended by project-internal meetings, which increasingly also include the client. The fact that even the coffee reserves are marked with a "signal card" (the literal translation of the Japanese word kanban) is more than an anecdote.


More transparency, more networking, more "We"

In this way, Kanban has spread almost secretly within SWMS. What's more, in the spirit of Taiichi Ohno, the developer of the Toyota Production System and thus of the original kanban, SWMS integrates its customers into its internal task management if required:

A tried and tested variant of project management has been established from this: customers place individual requirements (tasks) on a "Kanban board" (e.g. Microsoft Planner, Trello, Excel Online). There the customer can prioritize them as desired and also sees which tasks are already in progress or completed. This procedure guarantees customers a high degree of transparency, services that meet their requirements and predictable cost development. Regular partial deliveries in the form of betas, individual software modules or features give the customer early control. At the same time, they act as feedback that lets SWMS work efficiently.

In a daily, roughly ten-minute meeting of all employees, current tasks and problems are communicated in order to gather interdisciplinary solution ideas and to match tasks with the right expert knowledge. Afterwards, individual needs are explored in depth in small groups. The daily meeting gives colleagues an overview of company activities, and gives project managers an overview of workloads and priorities. Empowerment and team building are welcome side effects.

In short: whether our method is still "Kanban" remains open. The introduction, however, has developed into a great success at SWMS.


Effect of our Lean Development in Key Points:

  • Short Response Times
  • Earlier Project Start
  • Higher Delivery Quality due to Regular
    • Adaptation of the Feature Set
    • Communication
    • Prioritisation
  • Empowerment of Employees
  • Transparency within the Company
  • Transparency of Order Status
  • Sense of Togetherness

More about this topic:

leankanban.com
heise.de/thema/Projektmanagement

Anderson, David J.: Kanban. Evolutionary Change Management for IT Organizations
Epping, Thomas: Kanban for Software Development
Ohno, Taiichi: The Toyota Production System


Title photo credits: https://pixabay.com/de/problem-l%C3%B6sung-hilfe-support-3303396/

Automated Testing with Entity Framework 6
Tue, 17 Dec 2019

In software development, testing applications and troubleshooting take up a large part of the time. If the modules of a piece of software are cleanly separated by loose coupling of dependencies, the software can be tested in isolation using dependency injection and mock objects. However, if the software uses external resources, such as USB devices or databases, automated testing becomes more difficult. This blog post is therefore about testing a database connection with Entity Framework 6.

A simple ASP.NET MVC application

Fig. 1 shows the rough software architecture of a simple ASP.NET MVC App. The app consists of several layers with their respective defined task areas. Each of the four layers only knows the layer above it and can only access the functionalities of this layer (symbolized by the direction of the arrows).

The Controller Layer receives web requests, validates input data, authorizes users, intercepts service layer exceptions, and generates error messages for the user.

The Service Layer is responsible for the actual app logic and uses the Repository Layer to query, store and edit records.

The Repository Layer encapsulates access to a database and abstracts both the type of database (SQL or document based) and the way it is accessed (e.g. via an ORM such as EF6 or NHibernate).
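The layering described above can be sketched as plain constructor injection: each layer receives the layer it depends on and knows nothing about its callers. This is a language-neutral illustration in Python with invented names, not the app's actual C# code:

```python
# Minimal layering sketch (all names invented for illustration).
class Repository:
    """Encapsulates data access; here just an in-memory list."""
    def __init__(self):
        self._records = []
    def add(self, record):
        self._records.append(record)
    def all(self):
        return list(self._records)

class Service:
    """Holds the actual app logic; uses the repository for persistence."""
    def __init__(self, repository):
        self._repository = repository
    def register_user(self, name):
        if not name:
            raise ValueError("name must not be empty")
        self._repository.add({"name": name})

class Controller:
    """Validates requests and translates service errors into user messages."""
    def __init__(self, service):
        self._service = service
    def post_user(self, name):
        try:
            self._service.register_user(name)
            return "201 Created"
        except ValueError as e:
            return f"400 Bad Request: {e}"

repo = Repository()
controller = Controller(Service(repo))
print(controller.post_user("Alice"))  # → 201 Created
print(controller.post_user(""))       # → 400 Bad Request: name must not be empty
```

Because each layer only holds a reference to its dependency, any layer can be replaced by a mock in tests, which is exactly what makes the repository layer a natural seam for the integration tests below.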



Fig. 1: ASP.NET App Software Architecture 


This article is about testing the repository layer, which accesses an SQL database through Entity Framework 6. Since we also implement complex queries using the Entity Framework, it is helpful to be able to test them automatically. The numerous advantages of unit tests have already been discussed in detail in another blog post and will not be repeated here. Strictly speaking, our repository layer tests are not unit tests but integration tests, because we access a database and test more than one layer of our software at a time.

A simple Test

Fig. 2 shows a very simple test, divided into the three sections Arrange, Act and Assert. MSTest is used as the test framework, which can be seen from the [TestClass] and [TestMethod] attribute decorations. In the first section, Arrange, all necessary preconditions for the test are created: a new ApplicationUser is instantiated and saved (lines 28 and 29).

Note: To keep the logic of the test transparent, the repository layer is omitted here and the DbContext of the Entity Framework is accessed directly. When using the repository layer, the DbContext would be injected into that layer as a dependency.

The code under test is located in lines 32 and 40. An attempt is made to add and save a new document. However, the new document does not comply with the rules of the database, because the required FilePath and Content properties are not set. We therefore expect an exception to be thrown when the changes are saved. So here we test the schema of the database and the desired response to inserting invalid data.

Note: The NuGet Package FluentAssertions is used here for the asserts.  


Fig. 2: A simple test
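The C# test itself is shown in the figure. As a language-neutral illustration of the same idea, the following sketch uses Python's built-in sqlite3 module and an invented stand-in schema to check that the database rejects a record whose required columns are missing:

```python
import sqlite3

# Invented stand-in schema: Documents requires FilePath and Content,
# mirroring the kind of rule the EF6 test described above relies on.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Documents (
                    Id INTEGER PRIMARY KEY,
                    FilePath TEXT NOT NULL,
                    Content  TEXT NOT NULL)""")

# Act & Assert: inserting a document without the required fields must
# raise an error; this tests the schema, not the application logic.
try:
    conn.execute("INSERT INTO Documents (Id) VALUES (1)")
except sqlite3.IntegrityError as e:
    print("rejected as expected:", e)
```

The point is the same in both languages: the assertion is about the exception the database raises, not about a value the application computes.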


The Test runs - Only once

The test in Fig. 2 is executable, and the check mark between lines 18 and 19 indicates that it was successful. Unfortunately, in this case the test succeeds only once. Another test run fails, because the database throws an exception in line 29. This happens because the "TestUser" was already added to the database during the previous run and the UserName column carries a unique index (see Fig. 3).



Fig. 3: Table of the database in SQL Server Management Studio


By executing our tests, the database is manipulated so that it is no longer in a defined state when a new test run is started. The database should therefore be completely empty at the beginning of each test.


Always with empty Database

So that several competing tests can run simultaneously, data records should never be stored persistently in the database. This desired behavior can be achieved by running all database operations of a test in one transaction. At the end of the test, the transaction is rolled back so that none of the changes are visible outside it. To keep the complexity of the individual tests as low as possible, we create a simple base class (Fig. 4) and derive our test classes from it.


Fig. 4: Base class for tests with Entity Framework


By inheriting from the base class, a TransactionScope object is created before each individual test run, so that all operations run through the Entity Framework in a transaction that is never persisted. In addition, two DbContext objects are created for testing and generating the required test scenario. Of course, further DbContext instances can also be used in the tests.
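The rollback pattern from the base class in Fig. 4 is not tied to .NET. As a language-neutral sketch, the same idea can be shown with Python's unittest and sqlite3 (the schema, table and test names are invented): every test begins a transaction in setUp and rolls it back in tearDown, so even inserts into a uniquely indexed column can be repeated in every run.

```python
import sqlite3
import unittest

class TransactionalTestBase(unittest.TestCase):
    """Analogue of the TransactionScope base class: no test persists data."""
    @classmethod
    def setUpClass(cls):
        # One shared database for the whole run (stands in for LocalDB).
        cls.conn = sqlite3.connect(":memory:", isolation_level=None)
        cls.conn.execute("CREATE TABLE Users (Name TEXT UNIQUE)")

    def setUp(self):
        self.conn.execute("BEGIN")      # open the per-test transaction

    def tearDown(self):
        self.conn.execute("ROLLBACK")   # discard everything this test changed

class UserTests(TransactionalTestBase):
    def test_insert_user_first(self):
        self.conn.execute("INSERT INTO Users VALUES ('TestUser')")
        self.assertEqual(
            self.conn.execute("SELECT COUNT(*) FROM Users").fetchone()[0], 1)

    def test_insert_user_again(self):
        # Succeeds although another test inserted the same unique name,
        # because that insert was rolled back in tearDown.
        self.conn.execute("INSERT INTO Users VALUES ('TestUser')")
        self.assertEqual(
            self.conn.execute("SELECT COUNT(*) FROM Users").fetchone()[0], 1)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(UserTests))
print("all tests passed:", result.wasSuccessful())
```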

Create a Database

So that the DbContext in our test project can connect to a database at all, the connection string must be configured in the test project's App.config file (Fig. 5). LocalDB is completely sufficient for the automated tests. If the database does not yet exist on the test system, the test project must ensure that it is created and migrated to the current schema. This would of course be possible as a manual step via the PMC (Package Manager Console), but it would mean that the tests could not run successfully under Continuous Integration on a build agent.


Fig. 5: Connection string for LocalDB


To ensure that the local test database always corresponds schematically to the state of the Entity Framework model, an additional class is introduced (see Fig. 6). The class contains a single method that is executed once before each run of the test runner (the method is decorated with [AssemblyInitialize]). In this method, our Entity Framework migrations are applied to the database.


Fig. 6: Creating and migrating the database


Conclusion

Our tests now run against an empty local database, so we can automatically test the behavior of our software, based on our database schema, via the Entity Framework. The tests can also be executed successfully in Continuous Integration solutions, for example with the "Hosted VS2017" agent of Visual Studio Team Services.

Automated testing of applications saves a lot of time, as the effort for manual test procedures is reduced. In addition, errors that arise, for example when new changes are checked in, can be detected at an early stage.


Title Photo Credit: https://pixabay.com/de/chemie-lehrer-wissenschaft-labor-1027781/

The CAD-CAM Process Chain - Reduce Media Breaks, Lean to Result (Part 2)
Tue, 17 Dec 2019


This text follows on from the previous article "The CAD-CAM Process Chain: The Status Quo".

An important prerequisite for the optimization of the CAD-CAM process chain is the avoidance of media breaks and the associated loss of information.

Media breaks occur when different, unconnected, often even incompatible systems are used for different tasks in the product development process. 



This does not mean that only monolithic systems "from a single mould" are suitable for modern production. Well-coordinated individual systems and, above all, the underlying processes also offer many possibilities.

First we consider the problem of information loss and its origin. 


Why does information get lost on the way from one system to another?


The example from the previous article, in which the classic production drawing is interposed between design and production, is taken up here. The most obvious loss of information here is the loss of 3D representation.

The three-dimensional CAD model of a workpiece is displayed on a two-dimensional drawing and then recreated as a three-dimensional product.

A contradiction that is almost too obvious to be recognized.

In practice, an NC program is often created directly at the machine tool on the basis of a drawing. The machine then displays this program three-dimensionally, within the scope of its capabilities, to help the operator orient himself on the workpiece. From this point of view, the workpiece is modelled twice.

Staying with this example, production-relevant information is often lost on the way through the drawing. Of course, after many decades of standardization of technical drawings, including production drawings, this should no longer happen.

The process of creating the drawing is, however, still partly manual as a rule and therefore error-prone.

If surface information, tolerances and the like are not explicitly stated in the drawing, they are no longer available in the production process, with the result that parts are initially unusable.



The loss of information between two systems using an example


But even without the involvement of the classic paper drawing, information is often lost.

This can be seen very clearly in the example of two independent systems: system A is the CAD system, system B the CAM system. The exchange between the two takes place via classic generic exchange formats, such as STEP, IGES, 3DXML or JT.

Between the systems there is usually a purely geometric exchange of information: the bodies and surfaces of the workpiece can be transferred. In addition, general attributes or separately maintained PMI (Product and Manufacturing Information) can often be transferred as well. Usually, these attributes have to be entered manually by the designer.


But what kind of difficulties result from this?


Apart from some mathematical inaccuracies that occur when using these formats, which we neglect here, much information is lost anyway. With this type of transfer, a bore arrives in the part as nothing more than a cylindrical surface in a region where no material may be present later.

For the NC programmer in the CAM system, it is initially impossible to tell whether this is a functional bore, a circular pocket or even a flame-cut opening.

For manufacturing tolerances, for example, this is a crucial piece of information.

In daily practice, these problems are usually solved by individual agreements and conventions: colour codes on surfaces, supplementary descriptions in accompanying documents or additional production drawings, for example.

A further disadvantage is that approaches such as feature-based programming on the CAM side are usually not possible with neutral data formats, due to the above-mentioned lack of information about the concrete design of the geometry. Efficiency in the CAM area is therefore lost, which naturally also makes the overall process less efficient. It is clear, then, that pure data exchange does not amount to a full exchange of information.

So what should be the main focus when designing a CAD-CAM process chain?

As already mentioned, avoiding information loss is essential for the correct planning and implementation of a CAx landscape, especially in the CAD-CAM area. Roughly speaking, every piece of information entered into a model should also be available for the rest of the process without having to be converted, moved or duplicated again by manual intervention. Such steps mean additional effort on the one hand; on the other, they increase the error potential and thus the necessary rework and corrections.



This ideal state is certainly not achievable everywhere, but it should be the goal when designing CAx systems, especially for new acquisitions or major changes.

It is therefore imperative to first record the necessary level of information for each individual process step, such as design, NC programming, work preparation or fixture construction, and to define the information to be transmitted at the interfaces. A suitable system landscape can then be created. 

If, for individual reasons, this landscape is to consist of several incompatible individual systems, one of the most important tasks is to find suitable means of transferring information. For the reasons already discussed, these cannot be limited to the known exchange of CAx data in neutral formats, but should in any case also provide an automated transfer of further relevant information.

Large manufacturers of CAx systems recognized this necessity long ago and rely on seamless cooperation of the modules within their respective systems.

With these, one can often trust that almost all available information is also available for all process steps. The question of information exchange should always play a role when weighing up such holistic systems - which certainly also have their weaknesses - against a landscape of selectively ideal systems for each process step.


With "If This Then That" to the Use Case for the Internet of Things https://www.swms.de/en/blog/with-if-this-then-that-to-the-use-case-for-the-internet-of-things/ https://www.swms.de/en/blog/with-if-this-then-that-to-the-use-case-for-the-internet-of-things/#comments Tue, 03 Dec 2019 10:24:00 +0000 consulting https://www.swms.de/en/blog/with-if-this-then-that-to-the-use-case-for-the-internet-of-things/

Challenge to determine the use case of IoT

The Internet of Things is characterized by the technical connection of devices and systems to the Internet. Although this connection is the starting point of many IoT projects, in most cases it can be realized with relatively little effort after careful consideration of the available options.

The mere technical use of IoT, however, does not generate any business value per se, neither for the product, for the process nor for the company as the product or process owner.

One of the challenges that must be considered at the beginning of a project is the definition of the so-called use case. The use case describes, among other things, the possibilities for users or external systems to interact with the considered system.

These interactions should converge on a corresponding business objective, because it is only through the later application that the IoT system adds value to the product or process. This application builds on the technical foundation, such as data or new communication capabilities. Which basic categories of value IoT applications can achieve, and how this value can be quantified, will be discussed in a future article.

In practice, defining an IoT use case often means switching between the technology perspective and the application perspective, and systematizing ideas about what could be achieved with a sensor, a combination of sensors or an evaluation of the collected data. This article proposes a method for systematically using this back and forth - from technology to application and from application to technology - during use case definition in the idea phase.

What is "if this then that"?

The service provider IFTTT launched its platform on the US market in 2010. IFTTT stands for "if this then that" and describes a simple rule system of so-called recipes, which react to an input event ("if this") with certain actions ("then that"). The service has been highly praised in the media, most likely because of its simplicity. Simple automation tasks can be solved quickly, and systems from different manufacturers can be interconnected via triggers and actions.

Basic Idea of the IFTTT Pattern

For example, the detection of a person by the cameras of cloud supplier A can switch on the light in the house via cloud supplier B. SWMS has already featured IFTTT in an earlier article from a technical point of view. But how does the approach behind IFTTT help with the use case definition for IoT systems?

The "IFTTT" Thinking Pattern for Use Case Finding

The simple structure of the thinking pattern behind IFTTT helps with use case finding, which in practice takes place in the field of tension between technological possibilities and the actions or value to be achieved. It is important that we only follow the simple thinking pattern here and first free ourselves from the actual technology. The point is merely to establish, in principle, that certain technical prerequisites (alone or in combination) lead to actions, which in turn determine the value of IoT for the product or process.

Swimming pool Use Case

The simplest starting point is often to record the status quo with regard to acquired data and the sensors used, because in most cases IoT projects aim to "digitize" and further automate an already existing system or process. Therefore, start by clarifying the current structure of the system under consideration. It matters relatively little whether you are looking at a product or a process. A product may already use various sensors for its internal control and switch actuators accordingly, as in the swimming pool sketched above. In a process, start and end times, quality criteria or certain machine settings may already be recorded, albeit manually and only sporadically.

From technology to application

From this collection, ideas for new applications of physical measurements and collected data can now be derived and formulated as simple recipes à la IFTTT. It is important that possible technical hurdles are consistently ignored at this stage. The goal is not a detailed specification, but a systematic, in-principle formulation of an idea: which recorded technical situation ("if this") should result in which statement and possible action ("then that"). A simple pattern using the swimming pool as an example: "If the outside temperature > 26 °C, then switch off the heating". Of course, the potential of IoT is by no means exhausted here; you can model your use case according to the same scheme with more complex rules. Example: "If the chlorine concentration is low and the number of visitors high and the last chlorine order was x days ago, then order new chlorine automatically". Here the value of automating the business process and the service becomes much clearer, because the technical parameters are used in combination to generate it: a simplified, automatic ordering system that significantly simplifies a task for the operator and at the same time allows a pool builder to establish an after-sales business.
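Such recipes can be prototyped in a few lines of code. The following Python sketch is purely illustrative - the field names and thresholds for the pool example are assumptions, not part of any real system:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Recipe:
    """One IFTTT-style rule: a trigger ("if this") and an action ("then that")."""
    trigger: Callable[[Dict], bool]
    action: str

def evaluate(recipes: List[Recipe], state: Dict) -> List[str]:
    """Return the actions of all recipes whose trigger matches the state."""
    return [r.action for r in recipes if r.trigger(state)]

recipes = [
    # "If the outside temperature > 26 °C, then switch off the heating."
    Recipe(lambda s: s["outside_temp_c"] > 26, "switch off heating"),
    # "If chlorine is low AND visitor numbers are high AND the last
    #  chlorine order was more than 14 days ago, then order new chlorine."
    Recipe(lambda s: (s["chlorine_ppm"] < 1.0
                      and s["visitors_today"] > 100
                      and s["days_since_order"] > 14),
           "order chlorine"),
]

state = {"outside_temp_c": 28.5, "chlorine_ppm": 0.7,
         "visitors_today": 140, "days_since_order": 20}
print(evaluate(recipes, state))  # -> ['switch off heating', 'order chlorine']
```

The point of the sketch is the separation of trigger and action: new recipes can be added without touching the evaluation logic, exactly as in the IFTTT thinking pattern.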

From the application idea to the technology

The application idea can also come first. Let's stick with the pool example. A pool builder may ask himself how to retain customers after the sale and installation of a pool, and may hit on the idea that, among other things, continuously supplying his customers with chlorine would be an interesting business. He is also convinced that he can persuade customers to order chlorine from him on an ongoing basis if he succeeds in automating the control, monitoring and inventory management that customers find so annoying. "We have all the necessary data for this, don't we?" is the central question, which can also be addressed methodically with "if this then that" modelling: starting from the action (supply of chlorine, "then that"), the event triggers ("if this") are defined together with the necessary technical conditions. Often it is the stakeholders - customers, internal service employees, product development, etc. - who provide the ideas or are the addressees of these actions.

The true strength - third-party systems and more

In our opinion, the success of IFTTT lies above all in the fact that it guarantees manufacturer-independent connections. With this, almost infinite combinations of rules can be created without major technical restrictions imposed by the functional scope of individual manufacturers. When modelling an IoT system using this methodology, one should likewise avoid imposing technical restrictions too early. In both cases, therefore, correlations may be formulated vaguely, for example: "if anomalies in the vibration sensor at the pump increase over time, then send a message to the connected ERP system".
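Even such a vaguely formulated rule can later be made operational with a simple trend check. The sketch below is only an illustration under stated assumptions - the fixed threshold, the pump identifier and the message text are hypothetical, and a real system would use a proper anomaly detector and ERP interface:

```python
def anomaly_count(samples, threshold=3.0):
    """Count vibration samples that exceed a fixed (assumed) threshold."""
    return sum(1 for s in samples if s > threshold)

def check_pump(previous_window, current_window, notify):
    # "If anomalies in the vibration sensor at the pump increase over
    #  time, then send a message to the connected ERP system."
    if anomaly_count(current_window) > anomaly_count(previous_window):
        notify("pump-01: vibration anomalies increasing")

messages = []  # stands in for the ERP connection
check_pump([1.0, 1.2, 0.9, 1.1], [1.0, 4.5, 1.1, 5.2], messages.append)
print(messages)  # -> ['pump-01: vibration anomalies increasing']
```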

Any conceivable data sources or events are suitable for defining use cases. External data sources and systems in particular offer enormous potential for innovative approaches. Sources include, for example:

  • Third-party systems, e.g. tweets about the company, social media data
  • Inventory systems, e.g. messages via the ERP system, events from the SCADA system, PLC data
  • External data, e.g. weather reports, environmental values from building services engineering
  • Open Data, e.g. traffic data, market data, statistics, ...
  • etc.

Conclusion

"Method" is perhaps too grand a term, but in this article we have presented a simple procedure for systematically deriving the application and action ("then that") from the technical possibility ("if this").

IFTTT Schema for the generation of use cases

The second way from the application idea ("then that") to the necessary technological prerequisites ("if this") is also possible. Embedded in the description of the status quo, consisting of

a) the product or process to be 'digitised',

b) the technical capability or data it possesses to date; and

c) the stakeholders interested in the actions,

this procedure leads to a simple description of possible use cases and thus to a basis for deriving those that warrant further consideration in the context of an IoT project.

Download PDF for the development of the IoT Use Cases

A template for the development of the Use Cases is available here for free download (PDF). No registration is required for the download.

The CAD-CAM Process Chain: The Status Quo (Part 1) https://www.swms.de/en/blog/the-cad-cam-process-chain-the-status-quo-part-1/ https://www.swms.de/en/blog/the-cad-cam-process-chain-the-status-quo-part-1/#comments Tue, 26 Nov 2019 00:00:00 +0000 plm https://www.swms.de/en/blog/the-cad-cam-process-chain-the-status-quo-part-1/

How do we define the CAD-CAM process chain?


The "CAD-CAM process chain" is a frequently mentioned and recurring buzzword in the field of production-related design. But what is this anyway? And who benefits from that? We want to address these questions together and try to find some answers. 



Digitization as a puzzle piece to the finished product

Let's first look at the use of computers in design and manufacturing. CAD software (Computer Aided Design) has been widespread in design for many years, and the software market offers a wide variety of products from different manufacturers - the user is spoilt for choice. Usually, CAD software is selected on economic grounds, taking into account the individual requirements of the company: a company from the automotive sector naturally has different detail requirements than a steel construction company. Common to all users and applications is the generation of design data for the respective purpose. With today's common 3D systems, the generated data can be viewed directly on screen and adapted if necessary. This is the main focus of all these systems, because no parts can be manufactured, let alone sold, without a design. Many internal company strategies and procedures accordingly concentrate on the design area and the tasks directly connected with it. Improvements to increase efficiency, accelerate processes or save work steps also focus on this point.


The path of the different media to the end product

But how does the designed part become a marketable product? Only by converting the design work into an end product is it possible to offer the customer the desired object. As a rule, we call this step "production" - a very broad term. Mechanical engineering alone distinguishes six main manufacturing processes, including primary shaping, forming, joining and separating. The general task of production is to use these processes, possibly in several intermediate steps, to create a product from various starting materials that conforms to the design specifications. The greatest challenge is to extract the defined properties from the design and reproduce them in a real workpiece. This requires a flow of information from design to production.

The use of production drawings is the classic approach and still widespread today. Although the profession of technical draughtsman no longer exists in its original form, drawings are still created, often by specialised employees, and handed over to the production department. Experience shows that companies invest considerable time and money in creating and implementing processes for standards-compliant drawings: existing CAD systems are customized, external software is connected and PDM systems are extended. Once the drawing has been created, it is printed - often in several copies - and transferred to production.



Figure 1: Classic process chain


Production, which is often run as a separate department within the company, is then faced with the challenge of preparing the documents provided on paper for the mostly highly mechanized and automated production process. Different approaches are used for this purpose. Small companies with a small number of production machines manually enter the required data into the corresponding systems, generate the NC code at the machine and manufacture the corresponding parts. This not only takes considerable time; there is also considerable potential for errors due to incorrect entries or misinterpretations. The latter are then often countered with ever more refined production drawings.

The situation is similar when an independent CAM system (Computer Aided Manufacturing) is used in manufacturing. Here, too, the data from design often reaches the affected employees on paper, sometimes supplemented by 3D models in proprietary formats. The exchange of data via formats such as IGES, STEP or others alone causes another media break, which not least increases the effort for data maintenance.

During the subsequent NC programming, existing information is re-entered into a computer system, the CAM system. CAM systems are also offered in large numbers on the market. They also handle different types of data and can process them more or less well, depending on their individual characteristics.


Diffuse data in NC programming

The aim of this work is to generate an NC program designed in such a way that the desired product can be manufactured in the specified quality with the aid of one or more production machines. But this is much easier said than done: the transfer of information regarding quality - for example, surface quality - is inhomogeneous and, for systemic reasons, potentially faulty. Frequent coordination between the departments and the resulting change loops become necessary. An essential part of the NC programmer's job is therefore often procuring information about the desired product in order to select the suitable material and manufacturing process.

Once all the information has been obtained, the actual NC programming, the original task, can be started. The result is a command set for the machine tool, with which the desired product is produced from a defined blank. This finished part is the product of all the efforts of the production department. The following figure shows an overview of the exemplary data formats involved.



Figure 2: Example of data formats used in the process


An external observer is often struck by the confusing number of different data formats, file repositories and responsibilities, which tie up valuable resources and obstruct the workflow. Let's be clear: CAD files are updated in the design department, exchange files are generated, and drawings are created and printed. The transferred data as well as the CAM and NC files are then stored in production. All of these have to be stored, revised and made available - quite an effort, and one that needs to be well administered.

Now an important question arises: Is the media break between design and production really necessary or even desirable?

In the next article we will therefore deal with the question of what possibilities exist to reduce or even completely avoid media disruptions.


IoT vs. IIoT - Differences between Consumer Products and Industrial Systems https://www.swms.de/en/blog/iot-vs-iiot-differences/ https://www.swms.de/en/blog/iot-vs-iiot-differences/#comments Sun, 17 Nov 2019 18:03:00 +0000 consulting iot-berater https://www.swms.de/en/blog/iot-vs-iiot-differences/

The term Internet of Things describes the connection of systems, devices and everyday things to the Internet. The term as such does not specify the exact way of connection or the resulting design of the application. Industrial Internet of Things (IIoT for short) is a collective term for the connection of industrial machines and production plants, whereas IoT is often used to refer to ubiquitous, consumer-oriented products.

IoT and IIoT - A technical difference?

Looking at the application examples for the Internet of Things already known today, two large categories can be distinguished with regard to the target user group. On the one hand, consumers and end users are in focus: everyday systems such as smart household appliances, app-controlled lighting and heating in the smart home, services in the car or digital voice assistants. On the other hand, more and more machines in industry are connected via the Internet and feed our visions of digital factories in the so-called Industry 4.0 age. The addition of "Industrial" before "Internet of Things" highlights this special application category (the so-called IIoT).

As we have already described in our definition of the Internet of Things, there is no significant difference in the technical core of the two application categories. The system under consideration - often a mechanical one with electrical control - is given the additional capability of recording environmental parameters by means of sensors; in addition, its actuators can be influenced. In both directions, applications on the Internet can gain knowledge from the collected data and pass it on, or transfer decisions back to the actuators. From the point of view of the basic technical principle, there is initially no difference between a smart coffee machine in a consumer kitchen that records the filling level of the beans and re-orders them, and an industrial coffee roaster at a coffee producer that records and controls the temperature as well as the degree and duration of roasting.
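This common technical core - sensors feeding an Internet application, which in turn influences actuators - can be sketched in a few lines. The Python sketch below uses the roaster example; the field names, the 210 °C limit and the rule itself are invented purely for illustration:

```python
class Roaster:
    """Device-side stub: publishes sensor readings, applies commands."""
    def __init__(self, temp_c):
        self.temp_c = temp_c
        self.heater = "on"

    def publish(self):
        # Sensor reading sent up to the Internet application.
        return {"roast_temp_c": self.temp_c}

    def apply(self, command):
        # Actuator influenced by the decision received back.
        self.heater = command["heater"]

def decide(reading):
    """Application-side rule: keep the roast temperature below 210 °C."""
    return {"heater": "off" if reading["roast_temp_c"] > 210 else "on"}

roaster = Roaster(temp_c=215)
roaster.apply(decide(roaster.publish()))
print(roaster.heater)  # -> off
```

Whether the device is a consumer coffee machine or an industrial roaster, only the payloads and rules differ - the publish/decide/apply loop stays the same.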

System integration vs. classical product development

The organizational framework conditions from which the innovation of an Internet-capable, networked system is to emerge differ far more than the actual technology. In the industrial context, the market was and is often characterised by so-called system integrators, who plan, assemble and ultimately commission integrated solutions for their customers from machines, machine components or system construction kits of the automation world. Classic switch cabinet construction, a 24V-based low-voltage network for industrial sensors and actuators, and customer-specific programming of programmable logic controllers are characteristic of this environment. In our opinion, the challenges for the IIoT do not lie in the technology itself. System integrators certainly have some catching up to do when it comes to Internet-based protocols and data connections, and the security issue cannot be dismissed either. Nevertheless, almost all suppliers of programmable logic controllers and many suppliers of machine systems have long since integrated sufficient access options to the process images of the control programs. Merely reading out data for initially continuous monitoring is therefore, contrary to what is often claimed, less complex than feared and often no rocket science. The subsequent applications - i.e. how you as a user of IIoT-based systems, for example with your networked production line, become faster, more flexible and safer and thereby increase your turnover or significantly reduce your costs - will generally require more effort than the technology.

In classical product development, the effort for developing IoT solutions is distributed differently. In many cases, development teams have grown over many years around the development of electronic components for existing products such as household appliances or cars. In contrast to the often highly individual production lines in the industrial context, these products are mass-produced and therefore have their own custom-designed electronic circuits. Some industrial systems on the market even rely on their own electronics developments instead of programmable logic controllers. The technical hurdle of connecting such systems to the Internet in the first place is sometimes higher, since the connection - be it via Bluetooth, WLAN or a wired network - often has to be implemented in the form of additional hardware. Connecting the protocols through which sensors and actuators are to exchange data with central IoT platforms also requires adapting the software running on the microcontrollers. Due to the variety of approaches, the learning phase for connecting such embedded systems is usually not insignificant. The good news, however, is that more and more components are available on the market that make the start and subsequent implementation much easier - partly because the applications and business models that follow from connectivity can be successfully established by more and more companies, and best practices can be copied.

Different tasks that IoT and IIoT should fulfil

Beyond the organisational framework in which the solutions are (or should be) created, IoT and IIoT often have to fulfil completely different tasks. A hard demarcation is of course not quite accurate, but IIoT and Industry 4.0 projects are often strongly directed inwards, into the company's own processes. They aim at progressive automation and thus at a further increase in process efficiency. It is therefore not surprising that frequent use cases include preventive maintenance, quality assurance or post-calculation of orders - areas with high information requirements that previously, due to a lack of information, needed relatively complex and therefore expensive support processes in industrial production.

If systems are delivered to customers and then continue to be supported to varying degrees, e.g. through maintenance contracts, remote monitoring has become almost unavoidable in order to avoid constant, expensive on-site service visits. To a certain extent, this already includes aspects of product-oriented IoT systems.

IoT vs. IIoT Schematics

IoT projects and systems, in contrast, are much more outwardly oriented towards the customer. Services can be offered around a core product which, supported by the technical possibilities of connectivity, combine the benefits of equipment sales with those of services: the washing machine automatically orders detergent on demand, the refrigerator orders new milk, the kitchen machine has integrated recipe books, and the car's navigation system suggests the best routes to the best restaurants in town. The business models and the resulting revenue streams can change significantly due to the technology in products - and not only in consumer-oriented business. However, they have a significantly different objective than inward-looking IoT projects and are much more difficult to assess in terms of profitability. At the same time, the chances of developing new unique selling propositions or opening up completely new markets naturally increase.

Common challenges

Especially in the early planning phases, it is worthwhile to analyse the different characteristics of IoT and IIoT in detail and to derive from this which type of project is actually involved. This is not necessarily a black-and-white distinction between IoT and IIoT, but the basic objective has an enormous influence on the further steps and tasks to be considered in an entrepreneurial decision. Therefore, clarify the following points for yourself:

  • What is the basic IoT strategy? Do you focus internally or/and externally?
  • If you focus externally, what kind of innovation do you intend? A product innovation for your customer? A service innovation? A process innovation for your customer?
  • In the case of internal application: What efficiency gains do you want to achieve? What does the ROI look like?
  • In the case of external application: How do you want to achieve turnover and profit? How do you differentiate yourself in the market? What does the business plan look like? What do you have to invest?
  • What technical basis do you have today? Do you use programmable logic controllers and can you make use of the IIoT innovations of the controller manufacturers? Do you have your own electronics that you need to make internet-capable?
  • How is your team set up, where does it come from? System integrators have different thinking patterns than electronics developers, but presumably both have know-how deficits as to how IoT can be implemented in their respective environments.
  • What applications do you need to achieve your business goals after the technical connection? Anticipate the if-then scenarios: "If I have the information on the operating cycles of the components on a machine, then 90% of the time I know which parts have to be replaced during maintenance due to wear and tear".
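The if-then scenario in the last point can be anticipated quite concretely. In the following Python sketch the component names, cycle counts and wear limits are hypothetical placeholders chosen for illustration only:

```python
# Hypothetical wear limits per component, in operating cycles.
WEAR_LIMITS = {"gripper": 500_000, "belt": 2_000_000, "bearing": 5_000_000}

def parts_to_replace(cycle_counts, margin=0.9):
    """'If I have the operating cycles, then I know which parts to
    replace': flag components at or above 90% of their wear limit."""
    return sorted(part for part, cycles in cycle_counts.items()
                  if cycles >= margin * WEAR_LIMITS[part])

counts = {"gripper": 480_000, "belt": 900_000, "bearing": 4_990_000}
print(parts_to_replace(counts))  # -> ['bearing', 'gripper']
```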

In all planning steps, consciously free yourself from past approaches and focus on customer benefits, whether internal or external.

Conclusion

No customer wants to set up a complicated VPN connection and then control a lamp in a special-purpose tool by keyboard input. You have probably already seen what "Alexa, switch on the light" can do. Break through old patterns and concerns, especially in the planning phase, because the technology with which ideas can be implemented in a lean and secure way is developing rapidly. IoT technology is therefore far less often the bottleneck in IoT projects than commonly thought, and what is not possible today will soon be possible given the enormous pace of technical progress. The distinction between IoT and IIoT makes little difference here.

Our summary is therefore: don't waste too much time defining IoT vs. IIoT; instead, take on the ideas and resulting tasks that lead to a long-term IoT strategy. Create the technical and organizational prerequisites by connecting the components and training your team, and then work step by step towards your goals, regardless of whether they are about saving costs or generating new revenues, but don't mix everything up arbitrarily. Take the technical challenges seriously, but don't let them become an obstacle. Basically, the possibilities are limited only by your creativity and imagination.

]]>
https://www.swms.de/en/blog/iot-vs-iiot-differences/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
What is IoT? A Definition of the Internet of Things https://www.swms.de/en/blog/what-is-iot-a-definition-of-the-internet-of-things/ https://www.swms.de/en/blog/what-is-iot-a-definition-of-the-internet-of-things/#comments Tue, 08 Oct 2019 21:34:00 +0000 consulting https://www.swms.de/en/blog/what-is-iot-a-definition-of-the-internet-of-things/ Weiterlesen

]]>
The Internet of Things (IoT) describes the ability to network all kinds of devices. Thanks to increasingly ubiquitous Internet access via WLAN and mobile networks, and to ever smaller and cheaper computing power, everyday devices and systems from both private and professional environments can participate in the Internet. As a rule, this Internet connection means that devices can transmit data they record at their location and in their context using sensors. They can also react to control commands from higher-level applications and thus perform a desired action. Examples range from switching a lamp in the smart home to monitoring and controlling large wind power plants.

Is IoT technically complex?

The Internet of Things is basically the logical technical evolution of the Internet, facilitated by the fact that, firstly, infrastructural access to the Internet is becoming increasingly ubiquitous and, secondly, the necessary computers in the form of microcontrollers have become so powerful and inexpensive that everyday objects and systems can be connected to the Internet relatively easily.


Equipping a device or technical system that already operates electrically with the ability to communicate via the Internet generally does not require any significant innovation. However, some pitfalls often arise during the actual implementation, such as selecting an appropriate technology, the type of connection, securing the communication and choosing the server-side systems usually required in the so-called cloud. What is manageable for the manufacturer must not, however, lead to frustration for the end customer as the main user of the system: the connection should work like plug & play for the end user.

Why do some speak of a revolution in our business world regarding IoT?

Technical complexity, as described above, is not a revolution, but it is the foundation for far-reaching changes in our business world. Once devices are permanently available to customers, manufacturers and service providers, their value is often no longer determined by the core product itself. This makes it possible to establish new business models that can become completely different sources of income for companies. Take a look at a current vehicle: Information and communication technology (ICT) has found its way into this area to a large extent. Modern vehicles use a variety of sensors to record the current status and context and transmit location information, fuel levels and service information, for example. The ubiquitous network connection provides the vehicle with route information, the latest music, vehicle updates and other features, such as driver assistance systems.


The services based on this technology are currently still quite classic and traditional: you get live traffic information and the latest music in a monthly or annual subscription, and the workshop may contact you proactively. However, if your parcel carrier can securely deposit your parcel in the trunk of your car using a one-time code, as various car manufacturers and parcel services are already testing (https://www.heise.de/newsticker/meldung/VW-Paketlieferungen-in-den-Kofferraum-kommen-ab-2019-4143517.html), you will notice that IoT can nevertheless enable revolutionary business models and services as well as cross-sector cooperation.

Will IoT affect my business?

IoT and technology enthusiasts would immediately answer this question with a clear "yes", but it is still worth taking a differentiated look. While IoT technology has been available for several years and the technical hurdles are generally not that high, actual business models and successful examples are still rare. We are still at the beginning of this development: only innovators and early adopters have actually addressed the issue so far and are testing first applications as well as services and business models based on them. Typically, after this phase, the early majority of companies that will follow is hesitant at first. This is referred to as the "trough of disillusionment", in which the hype flattens out and exaggerated expectations return to normal. Experience shows that with the entry of the early majority, the technology will be used solidly and productively.


Against this background, now is the right time to consider whether IoT will affect your business. You can learn from the mistakes of the pioneers and focus on the "what" and "how" of your future value creation based on reliable technology. As a rule of thumb:

If the Internet has already changed your industry and your business in recent years, then it is likely that this can also happen through the Internet of Things.
]]>
https://www.swms.de/en/blog/what-is-iot-a-definition-of-the-internet-of-things/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
The Arduino-plattform - SWMS Playground https://www.swms.de/en/blog/the-arduino-plattform-swms-playground/ https://www.swms.de/en/blog/the-arduino-plattform-swms-playground/#comments Wed, 18 Sep 2019 13:21:00 +0000 software-development https://www.swms.de/en/blog/the-arduino-plattform-swms-playground/ Weiterlesen

]]>
Physical Computing Platforms

With physical computing platforms such as the Arduino, smaller projects can be realized quickly. Such platforms are a good introduction to microcontrollers and programming.

Especially in the short time of a student internship, it is difficult to get familiar with larger projects, and without previous knowledge of microcontrollers or programming it is almost impossible. Boards like those of the Arduino platform are a good way to enable students without previous knowledge to work independently on projects.

Here you can find out what we have already been able to achieve with a few internships and what we still have in mind.

Welcome to the SWMS Playground.


The Board (Arduino Platform) and the Ideas

In the beginning was the board. It is an Arduino UNO Rev3 with 14 digital inputs and outputs, 6 analog inputs, a clock rate of 16 MHz and, best of all, a simple USB port that lets you get started directly with any computer. The matching development environment is available on top of it for free.

The idea is from an article by Jessica Kelly (https://de.scribd.com/doc/170043613/LED-Cube-8x8x8-mit-Arduino-Uno-Rev-3-pdf): An 8x8x8 LED cube in which each LED can be individually controlled via the board.

About 1,000 solder joints later, there it was: all 512 LEDs sparkling with the first programs pulled from the net. But of course that's not enough for us. What can you do with a program-controlled 8x8x8 display? Obviously: 3D "Snake".



Fig. 1: The LED Cube



Fig. 2: Inside the Cube


We developed the program using Arduino's Processing-based development environment. An Arduino program always consists of three parts. The first part is for including libraries and defining global variables. The second part is the "setup" method, which is executed once at program start. Basic settings are made here, such as defining a pin as input or output, or assigning the control of a servo to a pin. The third part is the "loop" method, which is repeated endlessly and contains the actual program sequence.

To make an LED connected to pin 6 blink, pin 6 must first be defined as an output in the "setup" method. In the "loop" method, pin 6 can then be switched on and off using the "digitalWrite" method. A simple program could look like figure 3.



Fig. 3: Program Example
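
In case the figure is not reproduced here, a minimal sketch of the kind of program it shows could look like this (our reconstruction of the blink example described above; the 500 ms delay is an illustrative choice, not taken from the figure):

```cpp
// Blink an LED connected to pin 6 of the Arduino UNO.
// Part 1: global variables (no libraries needed for this sketch).
const int ledPin = 6;

// Part 2: "setup" runs once at program start.
void setup() {
  pinMode(ledPin, OUTPUT);      // define pin 6 as an output
}

// Part 3: "loop" is repeated endlessly.
void loop() {
  digitalWrite(ledPin, HIGH);   // switch the LED on
  delay(500);                   // wait 500 milliseconds
  digitalWrite(ledPin, LOW);    // switch the LED off
  delay(500);
}
```

The sketch is compiled and uploaded with the Arduino IDE; `pinMode`, `digitalWrite` and `delay` are part of the standard Arduino core.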


In search of an emulator for the Arduino, we found Tinkercad from Autodesk. Tinkercad is a free 3D design and 3D printing app that runs in the browser. With its Circuits module, not only simple circuits but also the Arduino can be simulated. Whole programs can be created from simple "blocks", and the actual code is made available for download. The simulation also reveals excessive loads on components, so incorrectly dimensioned components can be identified in advance. A separate parts list is generated for each project. This allowed us to create our own controller for our game in a short time, generate the necessary lines of source code and integrate them into our program.



Fig. 4: Example TinkerCAD

We want MORE

We have learned a lot from the projects with the Arduino. We had a lot of fun getting results so quickly with very little effort and we want even more.

On the Internet there are thousands of projects and ideas for boards like the Arduino, and we ourselves keep coming up with new ideas for using the Arduino in small projects, from turning the Arduino into a Game Boy to fully automating our coffee machine.

Since the Arduino board can easily be upgraded with a WiFi or Ethernet shield, we are also keen to use the Arduino as a driver for IoT prototypes.

What happens to the Cube?

The Cube with 3D Snake was a great project, and who wants to end a great project? So we plan to extend our 3D Snake. We still have a Kinect lying around. So if we manage to connect it to the Arduino...



Feel like it?

Would you like to work on small projects with us yourself?

Do you want to do an internship and are interested in microcontrollers and/or programming?

Can you hardly sleep because the flood of ideas is keeping you awake?

Then get in touch and we'll meet at the SWMS playground 🙂

]]>
https://www.swms.de/en/blog/the-arduino-plattform-swms-playground/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Getting Started! - How can digitalization projects be successful? https://www.swms.de/en/blog/getting-started-how-can-digitalization-projects-be-successful/ https://www.swms.de/en/blog/getting-started-how-can-digitalization-projects-be-successful/#comments Tue, 16 Jul 2019 11:57:00 +0000 consulting plm https://www.swms.de/en/blog/getting-started-how-can-digitalization-projects-be-successful/ Weiterlesen

]]>
Digitalization. A mega trend, hype and revolution all in one, but at least one topic that affects all parts of a company, from the management down to the employees.

The need for companies to deal with this topic is increasingly emphasized by scientists, experts and politicians, with full justification. The mechanisms and technologies of digitalization are so diverse and powerful that every company can make use of them and grow by designing digital products, processes and business models.

In our last blog article we already analyzed the most important reasons why digitalization projects fail. From our point of view as software developers and technology consultants, the main reasons can be summarized as follows:

  • Activism according to the motto: "Let's do something digital".
  • Misguided investment planning
  • Radical "Heave Ho" implementation strategies
  • Non-inclusion of those concerned
  • Lack of user orientation on the projects
  • Missing technology know-how

Small and medium-sized enterprises (SMEs) in particular find themselves in a dilemma as a result: on the one hand, there is the risk of not getting the expected benefit from a digitalization project; on the other hand, the many possibilities of digitalization must not go unnoticed.

How can digitalization projects be successful?

There cannot and must not be a one-size-fits-all answer to this question. Digitalization projects may or may not make sense anywhere, from hairdressing salons to the production of high-precision machines. Nevertheless, a structured approach and a suitable project framework greatly increase the probability that your project will be successful.




The Digitalization Strategy

The issue of digitalization affects the entire company, which is why networking employees from different departments and levels is the basis for a sustainable definition of strategy. The topic of digitalization in the corporate context is presented to the people involved as an open-ended basis for discussion, and the specialist departments work together on topics in which fundamental potential is seen.

Experience has shown that an extensive list of topics emerges from this process. On the basis of these topics, and in line with your general corporate strategy and orientation, a digitalization strategy with short-, medium- and long-term goals is drawn up, which prioritizes and clusters the individual topics. The strategy can be aligned to three areas: process, product and business model.

The most obvious potential for digitalization is in the area of processes. This can involve production, logistics and business processes. Therefore, the process environment is a good place to gain experience with digital technologies. Nevertheless, the product and business model level should not be disregarded. Consider the questions of how your product can be changed. What data can you collect and how can you or your customer use it? How will your business model change?



A road map can be derived from these considerations, which should also contain a rough timeline. A recommended approach is to identify a department in which a pilot project can be initiated.

The Requirement

A project can only be successful if there is a real need for its output. The basis for this is a detailed needs analysis, for which proven methods of as-is analysis can be used. In production, for example, procedures from lean management and value stream analysis can be applied. A typical finding is, for instance, that production is based on printed production orders.

It must be checked whether a digital business case exists and whether the identified pain points are suitable for digital production optimization. In the above example, it is conceivable that the paper-based production orders could be replaced by digital orders that the employee receives via a tablet.




Process improvement is about reducing costs and improving quality, but it must also be made clear to decision-makers that other factors, such as the ability to change processes and increased customer benefit, are of great importance. In addition, the effects of changes on other processes and departments must be taken into consideration in any digitalization project. In our example, an order call-off via tablets can simultaneously mean that information (e.g. processing times, machine occupancy, fault messages, price calculation, etc.) is recorded, which in turn can be used for capacity planning and the (automated) calculation of a delivery date.

Recording the user stories for the application case helps to create a deeper understanding of the changes. The user stories describe a user's requirements for the converted process. They are recorded for each stakeholder in the format "As <role>, I want <what>, so that <why>". This description allows the development process to focus on the requirements.

The Technology

A variety of technologies, software products and services are available on the market. Which technology is the right one is largely determined by the application, the corporate philosophy and the general conditions. Using the example of the conversion from paper-based production orders to a tablet application, the associated input and its processing can be recorded via an MES system, as offered on the market by many manufacturers. A second option is classic custom software development, which implements the desired functionality in an order-specific project. A third option is the use of low-code environments with business intelligence solutions and other cloud services.



In principle, it is advisable to map standard business processes with standard solutions and individual processes with individual solutions.

The implementation process

In the sense of the sustainable implementation of a digitalization project, it is necessary to understand this as a project that creates the digital structures to digitally map existing processes, products or business models and to support them in the best possible way.

The involvement of the later users from the beginning of the project plays a central role for its success. No matter how good the system is in theory, if the later users do not accept it, use it and give constructive feedback, the probability of success decreases significantly. For this reason, prototypes should be developed in the early project phases (starting with so-called paper prototypes) that make digitalization a tangible experience and emphasize the benefits for the user. This assessment is supported by the "Digital Transformation 2018" study:

"Digital change is driven by people - not by technologies. Digitalization projects are therefore not technology projects, but business transformation projects that affect the entire company and its culture.

An agile approach to project implementation creates transparency and minimizes risks, as users give feedback quickly within short development cycles and changes can be implemented with little effort. It is also advisable to develop a "minimum viable product", i.e. a system with minimal but functional and productive content. This proven procedure is used to obtain action-relevant feedback and to check customer requirements.

Finally, all project participants must be aware that a digitalization project is inevitably followed by an ongoing process. In this process, the real and digital circumstances are continuously adapted to each other. This is particularly important in a product and process landscape that is constantly changing, whether due to changing customer requirements or internal process optimization.


The follow-up projects

Following a successful proof of concept project, other projects typically follow that address different problems throughout the company. It is very important to keep track of these projects and to avoid isolated solutions that generate positive effects but conceal the greatest strength of digitalization. The really great benefit only comes when different projects can benefit from each other because they make information available to each other that did not exist before. If these effects can be exploited, then digitalization has the potential to enable new business models and to trigger even more far-reaching changes. Therefore, it is worthwhile to check again and again during and between projects how the current business strategy can be rethought.

Conclusion

Digitalization is a broad field that can be confusing. The challenge is particularly great for SMEs that have neither a large R&D department nor a large IT department with the capacity to deal with digitalization issues. However, projects that create these capacities can have great effects, especially in SMEs. Managing directors, project managers and process managers should therefore think about how these capacities can be created in everyday business.

One possibility is projects that are initiated jointly by several companies. Another possibility is to get support from external companies.

Independent consulting and project support helps to identify technological and organizational development opportunities and to share best practices from completed projects.

When selecting a partner, care should be taken to ensure that it supports the development of the required understanding of technology within the company so that it becomes available to the company in the long term.

For initial projects, the two principles "Think big, start small" and "Fail often, fail fast, fail cheap" are a good orientation aid. Also clarify the questions "why" and "with what goal" you would like to tackle a digitalization project. The individual project design depends entirely on the requirements of your particular case, so it is important to deal with these requirements already in an early project phase.

The points described help you to set a promising framework and to avoid the reasons for failure in digitalization projects. Please do not hesitate to contact us if you have any questions regarding the implementation of digitalization projects.


]]>
https://www.swms.de/en/blog/getting-started-how-can-digitalization-projects-be-successful/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Why do digitalization projects fail? https://www.swms.de/en/blog/why-do-digitalization-projects-fail/ https://www.swms.de/en/blog/why-do-digitalization-projects-fail/#comments Wed, 03 Jul 2019 16:01:00 +0000 plm https://www.swms.de/en/blog/why-do-digitalization-projects-fail/ Weiterlesen

]]>

A topic that is rarely discussed, because it is difficult to present, is the failure of digitalization projects. On the one hand this is understandable, since nobody likes to publicly emphasize the failure of a project that was driven forward with a lot of energy and possibly also a lot of money. On the other hand, the professional community loses valuable experience this way, and other participants cannot guard against repeating the same mistakes. Therefore, some of the practical experiences gained in everyday consulting will be presented here, along with some of the reasons why projects fail.




But first the question arises: what are digitalization projects? In this article, we look more generally at projects that attempt to create added value for companies or departments through the use of digital infrastructure, software-supported processes, the relocation of data streams, or the like. As a rule, these aim at maintaining or expanding an existing competitive advantage in the market, and increasingly also take into account the challenges of demographic developments in the labour market.

"Let's do something digital"

There is no doubt that the will to carry out a project is an integral part of it, and it is the right motivation with which companies and those involved in digitalization projects should start. Concrete needs of employees and management for the improvement of workflows, planning and the like are a recommended starting point. Of course, a unified view of all participants cannot always be assumed right from the start; a need for coordination is the rule here and should be welcomed in every case.

However, if a project starts with one-sided activism (usually "from above") in the direction of "Everyone is doing something digital now, we have to do that too", this is not the best motivation for the projects that follow. It is equally bad advice to start a project on the grounds that "everyone is doing this now", "competitor XY is already much further ahead" or similar. Internal possibilities and motives should always be found and used first.




Misguided investment planning 

- ROI considerations vs. missing investment courage -

One question that is often asked at the beginning of a project, especially by conservative companies, is when a new digitalization strategy will pay off and what risks arise from the upcoming investment. These questions alone reveal a certain reluctance to courageously tackle new, possibly unfamiliar paths. At the same time, dedicated consultants and experts try to clearly identify possible risks, quantify investment needs and quantify the associated savings. However, this approach is not always applicable or reliable. The main reason is that the entire internal structure of the company changes, especially in the case of comprehensive strategy adjustments. This process, even if carried out professionally and implemented with high-quality content, always represents such an intervention that the results cannot be fully assessed in advance. Another reason that leads to the failure of promising projects is overambitious expectations of the resulting savings and improvements. ROI (return on investment) expectations of 12 to 24 months play a particularly unfortunate role here. It is often forgotten that many projects, especially comprehensive ones, influence the company's long-term orientation and are intended to keep it competitive in the long term.

Heave ho! Variant - We have to change!

Within the framework of corporate digitalization strategies, a radical rethinking is often planned, frequently for the motives already mentioned, and a corresponding attempt is made. This is where the inexperience of internal decision-makers in such matters often takes its toll. Comprehensive, complex restructurings are planned, new processes devised and, if possible, everything is implemented in a single step. This approach is doomed to failure in almost all cases. Although a comprehensive, holistic view of the desired goals and of the activities required to achieve them is highly recommended, implementation in a single step is practically impossible as a rule. A step-by-step, planned approach, ideally with intermediate goals, is absolutely necessary.





But why?

On the one hand, it is incredibly difficult to specify very large projects completely and correctly from the very beginning. On the other hand, small mistakes keep hindering the progress of the "big picture", and at some point everyone is waiting for everyone else: there is no more project progress. The idea of many managers that a change can be made the way one flips a light switch is inappropriate here (see also https://www.diva-e.com/de/news/cdo-insights2019/). Nor does a software project behave like the purchase and commissioning of a new machine that is installed and ready for operation at a fixed time. Rather, one must speak here of a comprehensive, long-lasting process, which ideally continues on and on in order to keep up with the progressive further development of the market.

Employee Base

Many large digitalization projects have a direct impact on the day-to-day work of the employees involved, and such an intervention does not always meet with a constructive, cooperative reaction. For the implementation of digitalized processes, the improvement of data handling and the integration of various automation options, the cooperation of the colleagues directly affected is particularly important. As a rule, they know the processes around them best, can assess existing potential and are the first to recognize problems that arise in daily work.




Unmotivated employees in the specialist departments who do not want to get involved in the ongoing development of an ever-changing process are a serious danger to the success of a project. Instead of appropriate, well-founded criticism of technical and organisational imperfections, an "I don't care" or "everything used to be better" attitude quickly emerges. This can easily slow down constructive progress massively or bring a project to failure.

Especially in the case of ambitious projects with short introduction phases, high conversion rates and a lot of structural change, this can lead to a sustained weakening of employee motivation.

This is particularly serious, since employees, as carriers of a company's know-how and innovative strength, are important elements of its sustainable orientation. Small, step-by-step changes, in which effort is made to actively involve employees in the change process and to make the added value immediately noticeable, are often more successful.

Away from Demand

A further reason for the failure of digitalization projects is missing the needs of the company or its stakeholders. This sounds abstract at first, as every project should focus on its specific requirements, but it is quite easy to fail at this. One must remember that many participants whose needs are to be addressed in such a project have little experience with digital processes. It is therefore difficult for them to formulate their requirements and needs in such a way that the implementing project participants can actually fulfil them. One can of course argue the other way and say that the implementers, in particular external consultants who are supposed to support the changeover from outside, have too little knowledge of the actual processes and requirements.



Either way, it is necessary to formulate the requirements clearly and definitively and to communicate them to those responsible. Far too often, shortcomings in the planning only become apparent in the middle of the changeover process, or after it has been completed. This often leads to complex, expensive and demotivating adaptations of the created solutions. The best way to prevent this is to check the requirements with the persons involved regularly and in small steps. Errors can then be discovered and eliminated quickly, before they have a profound effect. Of course, it is also important not to get lost in details and to always keep an eye on the progress of the overall project.

Another way to plan past actual demand is to commit too strongly to a certain technical option and to continue pursuing it even when the goals you have set can no longer be achieved with it. Such predefinition can stem from existing software and hardware landscapes or from financial parameters. This does not always lead to the failure of projects, but the early exclusion of technical and organizational alternatives can greatly increase the risk. An open approach and regular reflection on requirements and possibilities offer the chance to leave the chosen path before it ends in a dead end.


A detailed, professional approach to starting a project properly will soon be illustrated here.

]]>
https://www.swms.de/en/blog/why-do-digitalization-projects-fail/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Teamcenter RAC Customization https://www.swms.de/en/blog/teamcenter-rac-customization-2/ https://www.swms.de/en/blog/teamcenter-rac-customization-2/#comments Mon, 27 May 2019 16:16:00 +0000 plm https://www.swms.de/en/blog/teamcenter-rac-customization-2/ Weiterlesen

]]>
In addition to the possibility of using software such as the Teamcenter PDM system as supplied by the manufacturer, many customers take the opportunity to implement their own ideas of processes and views. The following example shows how the user interface of the Siemens Teamcenter PDM system can be customized and how the company's own program logic can be integrated.

What is Teamcenter RAC?

The Rich Application Client (RAC) is Teamcenter's most widely used user interface. With the help of the RAC, the user can access all important Teamcenter functionality.

A Rich Application Client is, generally speaking, a graphical user interface with its own logic, based on a Rich Client Platform (RCP). A Rich Client Platform is in turn a framework used to develop plug-in-based applications.

What does this mean?

A plug-in is a software building block and can range from a simple function to a complex program. For the user, this means that the Teamcenter Client can be extended at any point.

Start designing the client so that it provides the functions you need, and transform the out-of-the-box installation into your personal Teamcenter. Use the manifold possibilities to set up the PDM concept in your company in such a way that your company's ideals and requirements are reflected for every user.

The following describes how to create a simple plug-in extension.

Extension Points

Extension Points are used to extend a plug-in-based application at a specific position. They are defined in the "plugin.xml" of the plug-in and describe the positions where the plug-in extends the RAC.

In the following example the context menu of the Teamcenter RAC is extended by a new menu entry. Three Extension Points are written for this purpose:

  • org.eclipse.ui.commands
  • org.eclipse.ui.handlers
  • org.eclipse.ui.menus
Fig. 1: Teamcenter Menu extension
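
A minimal plugin.xml for these three Extension Points might look like this; the IDs and the handler class name are illustrative, not taken from the project shown in the figures:

```xml
<plugin>
  <extension point="org.eclipse.ui.commands">
    <command id="com.example.helloCommand" name="Hello World"/>
  </extension>
  <extension point="org.eclipse.ui.handlers">
    <handler commandId="com.example.helloCommand"
             class="com.example.HelloWorldHandler"/>
  </extension>
  <extension point="org.eclipse.ui.menus">
    <menuContribution locationURI="popup:org.eclipse.ui.popup.any?after=additions">
      <command commandId="com.example.helloCommand"/>
    </menuContribution>
  </extension>
</plugin>
```

The id values tie the three extensions together: both the handler and the menu entry reference the command via its id.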

The "commands" Extension Point extends the RAC by a user-defined control command. The control command is an abstract representation, not yet the actual implementation. The actual implementation is created via the "handlers" Extension Point, which refers to the previously created control command and to a class in our plug-in.

With the "menus" Extension Point, the actual entry in the context menu is created. The entry refers to the previously defined control command.


The result of this extension is seen in Teamcenter's RAC as follows.

Fig. 2: Custom context menu

As shown in Figure 2, an entry has been added to the context menu. It gets its display name from the "commands" Extension Point, specifically from the XML attribute "name".

Java Code

In the plugin.xml, a class is specified that is called when the control command is released. This class in the plug-in must extend the abstract class "AbstractHandler". The execute function, which we override, is called by triggering the control command, i.e. by clicking on the context menu entry. In this example a popup window appears with the message "Hello world".

Fig. 3: Handler Implementation
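
A minimal handler along the lines of Figure 3 might look like this; the class name and message text are illustrative, and the code requires the Eclipse platform libraries that ship with the RAC:

```java
import org.eclipse.core.commands.AbstractHandler;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.jface.dialogs.MessageDialog;

public class HelloWorldHandler extends AbstractHandler {
    // Called when the control command is triggered, e.g. by clicking
    // the new context menu entry.
    @Override
    public Object execute(ExecutionEvent event) throws ExecutionException {
        MessageDialog.openInformation(null, "Plug-In Example", "Hello world");
        return null; // command handlers conventionally return null
    }
}
```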

Service-Oriented Architecture

Service-Oriented Architecture (SOA) is an architectural pattern that brings together several (possibly distributed) services and maps them at a higher level of abstraction. Teamcenter offers such an SOA. The high level of abstraction makes complex operations, such as creating an item, possible with just a few function calls.

When developing a plug-in for Teamcenter's RAC, these SOA services can be accessed. This enables a high degree of automation, customization of the user interface, and low development effort.

Deploy

The central maintenance of the clients is always an important point for the administration. Adding, revising or removing plug-ins can be done via a shared folder in the network. All you need to do is create a ".link" file in the client's ".link" folder that contains the path to the shared folder.

Fig. 4: Client Plug-In Link
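
Such a ".link" file is plain text and contains little more than the path to the shared folder; the path below is illustrative:

```
path=//fileserver/teamcenter/shared-plugins
```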

Different clients can be linked to different or multiple plug-in folders. This ensures that every user has access to the latest plug-ins that are needed for his specific application.

Conclusion

A RAC customization is a simple way to provide the user with new features in Teamcenter. A large number of functions can be accessed via the SOA interface. Since Teamcenter's RAC is completely plug-in-based, a menu entry is only the beginning of a comprehensive redesign of your system landscape. There are hardly any limits to customization: from custom views to the implementation of completely new software, everything is possible.


]]>
https://www.swms.de/en/blog/teamcenter-rac-customization-2/feed/ 1 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
How can my company get practical benefit from IoT platforms? https://www.swms.de/en/blog/how-can-my-company-get-practical-benefit-from-iot-platforms/ https://www.swms.de/en/blog/how-can-my-company-get-practical-benefit-from-iot-platforms/#comments Fri, 29 Mar 2019 14:21:00 +0000 consulting IoT https://www.swms.de/en/blog/how-can-my-company-get-practical-benefit-from-iot-platforms/ Read more

]]>
IIoT platforms are the central component in the Industrial Internet of Things

The so-called IoT or IIoT platforms play a central role in the Industrial Internet of Things, or IIoT for short. They support all tasks, from the acquisition of plant data, storage, visualization, processing and analysis, through to the implementation of actions based on the knowledge gained. The respective platform providers offer software components which can be supplemented by individual applications.

Individual developments versus platform use

The alternative is developing your own system, which would have to cover the complete process chain from data acquisition by sensors, data storage, visualization, data evaluation and data security through to the programming of intelligent systems and machine controls. The heterogeneity of these tasks alone makes implementing all functions extremely complex and time-consuming. For this very reason the potential of industrial platforms is very high, and companies benefit particularly from two points:

  1. Infrastructure
  2. Provision of a wide variety of services

MindSphere - IoT Platform  

Since many customers already use Siemens products in their manufacturing environments, we would like to take a closer look at the MindSphere - IoT platform from Siemens, present a concrete case study and demonstrate the extensive application possibilities. MindSphere is provided as a "Platform as a Service" environment and can be adapted to the multitude of application cases and required functional scopes through appropriate extensions and individual developments.

Platform as a Service, or PaaS, is a cloud system that provides a computing platform for application developers to eliminate the need to purchase and maintain hardware or software. The low entry costs, already usable services and scalability in particular speak for the choice of this platform approach.

The system enables any machine data to be recorded, stored, made available and processed in self-developed applications. Different frameworks (e.g. Angular, .NET Core) and high-level languages (e.g. Java, Python) can be used. A REST API is also provided, which makes it possible to access the stored data.

REST stands for Representational State Transfer and describes a paradigm for web services. The REST API uses HTTP requests to access information. For example, there are PUT, GET, and DELETE requests. PUT creates or changes a resource, GET retrieves a resource, and DELETE removes the resource.
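
To make this concrete, here is a minimal sketch in Java that builds (but does not send) a GET request for time-series data. The host and endpoint path are illustrative placeholders, not the actual MindSphere API:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestSketch {
    public static void main(String[] args) {
        // Hypothetical time-series endpoint; a real call would obtain an
        // OAuth token first and send the request via java.net.http.HttpClient.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-tenant.mindsphere.example/api/timeseries/coffeemachine"))
                .header("Authorization", "Bearer <token>")
                .GET()
                .build();
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Swapping `.GET()` for `.PUT(...)` or `.DELETE()` yields the other request types described above.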

Division into North and South - what does that mean?

Ideally, an IoT platform is able to connect any type of device via any available interface (OPC UA, MQTT, etc.) and to collect and evaluate various data. In MindSphere, the connection of networked systems is located in the so-called Southbound.

Through the Northbound, on the other hand, it is possible to integrate applications tailored to the specific use case in addition to Siemens' own apps. These can either be hosted directly on MindSphere, or the API can be used for external access to the data in MindSphere.

The MindSphere platform provides appropriate APIs for both the Northbound and Southbound, which allows for a corresponding openness of the system. 


Figure 1: Structure of an IoT platform using the example of Siemens MindSphere

"Making coffee" from the point of view of an IoT platform

Digitization, IoT and Industry 4.0 are the basis for new and impressive business models and enable far-reaching optimizations in logistics, production and business processes. At the same time, they are complex technologies that need to be understood and questioned before one's own business and processes can be developed further.

Model structures and prototypes are one way of stimulating the imagination. As you could conclude from Ingo's blog article Order Coffee with BMW ConnectedDrive and IFTTT, we at SWMS like to drink coffee. Therefore we explain the basic possibilities of IoT platforms with a sample project called "Coffee Machine".

Did you know? Since the beginning of the Internet, coffee machines have repeatedly served as a rewarding application for testing new technologies. At Cambridge University, the first webcam developed there was used to monitor the filling level of a coffee machine via the web, saving employees unnecessary trips. The Trojan Room coffee machine could be monitored via webcam until 2001.

A simple coffee machine stands for a plant, a production machine or a process (hereinafter referred to as a "system") that is to be expanded using IoT technologies. The IoT levels are represented by the coffee machine as follows.

Data acquisition and control

As standard, the machine has no sensors or IoT functionality that would enable communication via the Internet or a local network.

For the example project, three sensors are installed in the coffee machine. A temperature sensor measures the water temperature in the water tank and a sensor measures the temperature at the coffee outlet. In addition, an ultrasonic sensor is used to measure the filling level of the water in the water tank.

The sensor system is connected to an Arduino microcontroller, which has network access via a WiFi module. A management process is running on the controller, which controls data acquisition and network communication. 


Figure 2: Circuit used for the connection to MindSphere 
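
The management process on the controller conceptually does little more than read the sensors and publish a timestamped payload. Sketched here in Java for readability (the real firmware runs in C/C++ on the Arduino, and the field names and JSON structure are illustrative, not the actual MindConnect format):

```java
import java.util.Locale;

public class SensorPayload {
    // Builds the message the management process would publish via MQTT.
    // Locale.ROOT keeps the decimal point independent of the system locale.
    static String payload(double tankTemp, double outletTemp, double levelCm, long ts) {
        return String.format(Locale.ROOT,
                "{\"timestamp\":%d,\"tankTemp\":%.1f,\"outletTemp\":%.1f,\"levelCm\":%.1f}",
                ts, tankTemp, outletTemp, levelCm);
    }

    public static void main(String[] args) {
        System.out.println(payload(92.5, 78.0, 10.4, 1553866800L));
    }
}
```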

In reality, systems are equipped with PLC controllers and sensors. However, the collected data is often only used locally for controlling the systems.

Connectivity

Siemens provides a variety of options for connecting the local system to the IoT MindSphere platform. In our prototypical implementation, the coffee machine communicates with the MindSphere IoT Extension via the MQTT protocol and transmits the sensor values (temperature and level).

MindConnect IoT Extension is a MindSphere application from Siemens for easily connecting systems without a Siemens controller (here the Arduino controller) to MindSphere via MQTT.

Data storage and visualization

The data is stored as so-called time series data, i.e. as pairs of sensor value and time stamp. For a first visualization of the data, MindSphere provides a graphical interface in the IoT Extension as well as in the Fleet Manager.


Figure 3: Visualization of data (here water temperature) in the MindSphere IoT Extension 

This illustration shows when a coffee was produced. The standard tools can thus be used to create initial dashboards for monitoring systems, which can be used, for example, by a production manager to assess the condition of his systems and react to failures.

Data analysis and processing

However, it only becomes really interesting when the recorded data are linked to each other and to external sources. Applications offered by MindSphere can be used for data analysis and processing. It is also possible to host your own applications on MindSphere or to access MindSphere with your own applications.

For the coffee machine project, an application based on the Angular Framework was developed that displays the temperature data on a dashboard and converts the fill level into the number of remaining cups. Figure 4 shows the condition before a coffee was produced.


Figure 4: Angular app on MindSphere before brewing coffee 

Figure 5 shows how the temperature at the outlet (red line) rises. At the same time, the number of cups available in the tank decreases. 


Figure 5: Angular app on MindSphere after brewing coffee 

Transferred to real use cases, MindSphere-based applications can display, calculate and predict the condition of machines, among other things. Algorithms from the field of machine learning can also be used to process the data. It is also possible to actively generate alarms and notifications that inform the responsible employees when an event occurs (e.g. a limit value is exceeded).

Control of machines

The communication path from the IoT platform back to the system under consideration is not yet covered in this example project. In the future, the collected data will be used to generate control commands that influence the behavior of the machine and thus contribute to the automation of processes.

Security & Data Sovereignty

Especially in the production environment it is extremely important to protect data from unauthorized access and at the same time to be able to determine who is allowed to do what with the data at any time. The leading providers of IoT platforms have also recognized this requirement and therefore provide comprehensive security functions for data acquisition, transfer, storage and access.

Why is the investment in digitization sensible?

Before investing in comprehensive IoT and digitization projects, it is necessary to think about the goal of these projects for one's own business and answer the question of how these projects will help to increase business success. Using our coffee machine as an example, we present some possible views on the business model:

Coffee machine user

The operator/user wants to be able to carry out the actual purpose of a coffee machine - brewing coffee - at any time. For this, it is important that the machine is ready for operation at all times. Of course, the temperature of the coffee and the question of whether there is still enough water to brew it are relevant here. However, these sensor data only matter for "pure coffee brewing". An overview of remaining coffee pads, average consumption and average costs, as well as mobile control of the machine, could also be interesting for the user. We will continue to develop our project in this direction in the future.

Coffee machine manufacturer

Two views of the data are important for the manufacturer. On the one hand, the view of many (coffee) machines of the same series is of great interest for corresponding forecasting of failure cases and uncovering any design and production problems that may be contained. On the other hand, the prediction or determination of the failure of a specific coffee machine is of importance for service provision for the actual user.

Service provider for the provision of coffee

A service provider who is responsible for the provision of coffee for events or in general is interested on the one hand in ensuring the supply of coffee, water etc. and on the other hand in being able to detect failures of used coffee machines promptly or even predict them.

One database for everyone - why is that so important?

It is precisely these different stakeholders of the coffee machine(s) that have different demands on the data and its intended use. What they all have in common, however, is that they can use the same database. It is therefore of crucial importance to provide all stakeholders with a platform for querying and evaluating the data released to them.

New business models through new opportunities

The prompt provision of user data in particular opens up new possibilities in the design of business models. For example, a provider could offer the provision of coffee as a service. All maintenance measures, consumption optimisation of operating materials (e.g. use of coffee beans) and trouble-free operation of the machines are the responsibility of the provider, who is remunerated only for brewing coffee according to a pay-per-use model. Further innovative business models are favoured by the consistent use of the data and allow a new type of service-oriented business area to emerge.

Summary

The presented project shows the use of the Platform-as-a-Service system MindSphere: the connection of sensors and the use of the sensor data within a self-developed MindSphere application.

A coffee machine was equipped with different sensors and connected to MindSphere via MQTT. The real coffee machine connected via IIoT mechanisms was then presented for demonstration purposes via dashboard and, in addition to the master data, essential transaction data and their use from different usage perspectives (stakeholders) were shown.

Visit us at the HMI and have a look at the use case

SWMS makes available the know-how for digital change, the use of Internet of Things technologies and the associated ideas and possibilities. The use of innovative technologies raises potential and achieves tangible results. We support you by understanding the requirements of operational processes in service and industry and by developing tailor-made solutions. With a holistic approach we advise and enable optimal decisions regarding IT and technology in order to maneuver ahead in the waters of digitalization and Industry 4.0.

The demo case "coffee machine" will be exhibited as an application example at the Hannover Messe at the booth of SWMS Consulting (Hall 5, D05). Information about our booth can be found here (https://www.swms.de/en/hannovermesse/).


]]>
https://www.swms.de/en/blog/how-can-my-company-get-practical-benefit-from-iot-platforms/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Order Coffee with BMW ConnectedDrive and IFTTT https://www.swms.de/en/blog/coffee-order-with-bmw-via-ifttt-control/ https://www.swms.de/en/blog/coffee-order-with-bmw-via-ifttt-control/#comments Wed, 06 Mar 2019 15:37:00 +0000 IoT https://www.swms.de/en/blog/coffee-order-with-bmw-via-ifttt-control/ Read more

]]>
Which task is to be completed in this example?

Who isn't familiar with this? You arrive at your office in the morning and your colleagues are already sitting with a cup of coffee in front of their screens. So you have to go again and get the coffee from the kitchen yourself. As a rule, you are then allowed to bring your dear colleague the second cup of coffee with you. If you have an e-mail account and you happen to be a driver of a BMW with ConnectedDrive, we will change that now, because BMW Labs provides an appropriate service for the automation service IFTTT, which you can use to have these and other tasks done without any problems.

What are No-Code / Low-Code Automation Services?

With the help of so-called no-code / low-code automation services, it is possible to create and manage workflows that automate certain processes without using a programming language. Since there is no "program", such an orchestrated flowchart is called a workflow, recipe or applet. Mastering a programming language is not necessary. For example, it is possible to send a message to your mobile phone if the heating fails at home, to switch surveillance cameras on or off when leaving the house or coming back, or to switch on the light in the office when an e-mail with certain content arrives. Almost anything is possible. These services are often used in the Internet of Things (IoT) sector. Limits are only reached if the automation service provider (Microsoft Flow, Tasker, IFTTT, etc.) does not provide the appropriate service (or connector). But even in this case, the providers offer APIs and developer areas to build missing services as so-called user-defined services or gateways.

In the current case we use the IFTTT service - derived from IF This Then That. Further information (introduction and registration) can be found at:

https://ifttt.com/

IFTTT offers a wide range of services and ready-made applets for immediate automation of tasks. Popular applets and services can be found here, for example on Amazon Alexa, Philips Hue and Spotify, to name just a few. Here you can also find the already mentioned service of BMW Labs, which can fire the following triggers among others:

  • Arriving Soon: fires each time you are X minutes away from the target.
  • Enter an Area: fires every time you drive into a defined area.
  • Exit an Area: fires every time you leave a defined area.
  • Speeding: fires every time you exceed the speed of X km/h.
  • Driving Started: fires every time you leave "P" in automatic mode.
  • Car Is Parked: fires every time you select "P" in automatic mode.

The "Enter an Area" trigger, which fires when the vehicle enters a predefined radius on its way to the office, is therefore ideal for this task. This automated triggering of an action after crossing a predefined boundary is also called geofencing; it is often used, for example, to activate or deactivate alarm systems or surveillance cameras.
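
Geofencing logic like "Enter an Area" can be sketched as a distance check plus an outside-to-inside transition. A minimal sketch in Java, where the coordinates and the 2 km radius are illustrative:

```java
public class Geofence {
    static final double EARTH_RADIUS_M = 6_371_000;

    // Haversine distance between two lat/lon points, in meters.
    static double distanceM(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // The trigger fires on the transition from outside to inside the radius.
    static boolean enteredArea(boolean wasInside, double distM, double radiusM) {
        return !wasInside && distM <= radiusM;
    }

    public static void main(String[] args) {
        double officeLat = 53.14, officeLon = 8.21; // illustrative coordinates
        double carLat = 53.15, carLon = 8.22;
        double d = distanceM(carLat, carLon, officeLat, officeLon);
        System.out.println(enteredArea(false, d, 2000)); // within a 2 km geofence?
    }
}
```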

As another service we use Office 365 Mail to send a message to the colleague in my office who is supposed to put the coffee at my place when the trigger selected above is executed 😉. Of course, you can also use other e-mail services that are represented by corresponding services in IFTTT.

What does the concrete workflow with IFTTT look like?

The prerequisite for successful implementation is, of course, prior registration with IFTTT and BMW ConnectedDrive. Here are a few pictures of how to set up the applet and how to activate the BMW Labs widget in your car.

Via IFTTT the applet is created and set up as follows (picture gallery):


The applet is thus created in IFTTT and ready for operation. In the next step, the opposite side, the vehicle, must be set up accordingly.

What to do in the MMI of the vehicle?

In the vehicle, the BMW Labs Widget must be set up in the Multi Media Interface (MMI) Splitscreen (picture gallery):


The BMW Labs Widget is now installed in the MMI. By default, the split screen always remains visible. If the split screen is deactivated, for example to see a larger section of the map during navigation, the BMW Labs Widget will not work.

Further information about the widget used by BMW Labs can be found at IFTTT or at the following URL:

https://labs.bmw.com/

How does the applet work in the concrete case?

After starting the vehicle, the BMW Labs Widget is initialised and loads all IFTTT recipes. 


BMW Labs Widget is loaded and displayed on the split screen

In the event history you can see that the IFTTT recipes have been loaded. This means that the vehicle or MMI is now ready to trigger the corresponding events. Let's go and see what happens. 


Event triggered / The e-mail is sent ;-)

Shortly before reaching the office, the desired event is triggered (Enter area - Trigger sent). The name of the "Location description" is also displayed. We called this "Work". According to the composition of our applet, an e-mail is now automatically sent to the colleague in my office.

Now all you have to do is put the coffee on the table.



Conclusion

The compilation and setup of the IFTTT applet is very easy and can be done with a few mouse clicks. Due to the large number of integrated services and the applets already available, there is an almost infinite number of use cases that are just waiting to be automated. Especially in the areas of Home Automation, Internet of Things (IoT) and Office Automation, there are very good possibilities to make everyday life easier with the help of workflows or applets. 

The automation services mentioned above can also be used to automate much more complex workflows. As a rule, this goes beyond no-code or low-code solutions. To automate more comprehensive production processes, for example, or to better coordinate employees in the field and meet the respective security requirements, the involvement of competent software developers is indispensable.

The purpose of this article is simply to show what possibilities the integration of services and devices that we use every day can bring. One can well imagine setting up a derivative of this applet for coming home. It's up to you to decide what you want on the table 😉

]]>
https://www.swms.de/en/blog/coffee-order-with-bmw-via-ifttt-control/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Application extensions with Siemens NXOpen https://www.swms.de/en/blog/application-extensions-for-siemens-nx-with-the-nxopen-interface/ https://www.swms.de/en/blog/application-extensions-for-siemens-nx-with-the-nxopen-interface/#comments Fri, 08 Feb 2019 11:17:00 +0000 plm https://www.swms.de/en/blog/application-extensions-for-siemens-nx-with-the-nxopen-interface/ Read more

]]>
Siemens NX - small helpers, big impact

The use of existing standard systems in the company brings, besides the undisputed advantages of widespread software, a limitation of one's own processes to the possibilities provided by that software. This applies not only to office applications, browsers or ERP systems, but also to CAx and PLM software.

These restrictions can cause unwanted additional work, for example through many clicks and lengthy procedures, or simply prevent desired processes.

The major manufacturers of widely used applications are well aware of this fact and therefore frequently offer programming interfaces for extensions.

Many companies rely on powerful CAx systems for their individual processes in the areas of design, construction and manufacturing. Leading systems such as CATIA V5, Solid Edge or Inventor offer interfaces to extend their functionality. Siemens NX also offers such an interface, which is widely used.

The so-called Siemens NXOpen interface offers the possibility to create and use custom application extensions for NX in different programming languages.

The interface is based on an object-oriented class structure that provides the user with all required classes and structures, including their functions. This class structure is available in C/C++ and Java, and, as a wrapper around the C/C++ structure, also for the .NET languages C# and VB.NET. SWMS uses the modern language variants C# and VB.NET to extend Siemens NX.

Singletons as entry point and formal conventions

The easiest way to get started with NXOpen programming is to record and evaluate journals. These can be recorded in the desired language in the corresponding menu and then serve as a template for your own development.


Figure 1: Siemens NXOpen Singletons

Journals follow a fixed scheme, just as your own applications should. For example, this scheme specifies the important function names that NX uses to jump into your application. The best example is the Main function, which in most cases is the main entry point. In recorded journals, the first connection points to the current NX session are usually set in this function. The most important of these is the session object representing the current session. It is obtained via a comparatively rarely used singleton call, here "GetSession". Starting from this session object, all data types, the parts loaded within the session, etc. can be accessed. The singleton implementation allows the developer to make this call multiple times in the application, because the same object is always returned.
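
As an illustration, the skeleton of such a journal might look as follows in the Java variant of the API, where the singleton call is SessionFactory.get("Session") rather than the .NET call Session.GetSession(). This is a hedged sketch: it requires the NXOpen libraries and a running NX session, and the output text is illustrative:

```java
import nxopen.NXException;
import nxopen.Part;
import nxopen.Session;
import nxopen.SessionFactory;

public class Journal {
    public static void main(String[] args) throws NXException, java.rmi.RemoteException {
        // The singleton call: always returns the same session object,
        // so it may safely be invoked multiple times.
        Session theSession = (Session) SessionFactory.get("Session");

        // Starting from the session, the loaded parts become accessible.
        Part workPart = theSession.parts().work();
        theSession.listingWindow().open();
        theSession.listingWindow().writeLine("Journal connected to the current NX session.");
    }
}
```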

Limits of Object Orientation for Recorded Journals

Recorded journals are an important starting point for your own development. They are used to gain an insight into the processes within Siemens NX and to record the appropriate use of the available objects and functions. However, this valuable tool also has its limitations, especially in the area of user interaction. If a function is recorded in which the user has to make a manual entry - in our example the edge length of a cuboid - only the resulting literal value appears in the recorded code. In the example shown below, in the case of the edge lengths, you can see the entered numerical values ("100") without knowing their origin. Such and similar cases naturally present a small challenge on the way from the recorded journal to your own application. Cleverly substituted with custom input possibilities, however, such a journal can quickly be developed further.


Figure 2: Siemens NXOpen Journal

The output of simple information, such as the number of components in the assembly, article lists, weights, dimensions and many others, is also a very popular and widespread application. The import and export of geometries and metadata also plays an important role in the development of NXOpen extensions.

Many of these capabilities can even be automated without a costly additional license.

By using journals you lose some comfort in development, but you also save license costs compared to full-fledged applications, which have to be signed with a special license.

Based on basic examples such as the one shown above, it is possible to develop applications of any complexity and use them in everyday product design. This includes the possibility to use individually designed user interfaces with the help of common frameworks (Forms, WPF, etc.) or to compile an NX-custom interface from prefabricated components in the so-called Blockstyler, an additional application in Siemens NX.


Figure 3: Magic NX Wizard

Development opportunities with Siemens NXOpen

The NXOpen interface between the NX CAx platform and a self-created application offers various possibilities for the automation of the entire system. This opens up opportunities to simplify your own work. A typical example here is to automate frequently performed use cases, such as the export to an exchange format, so that instead of several clicks in the corresponding NX wizard, the export with the desired settings takes place with one click.

Conclusion

Even if many don't believe it at first: Such small aids save a lot of time and spare the nerves of the user.

They also provide a good introduction to an individually adapted environment. Design assistants, intelligent component adaptations and specific checks are further options for using the system more efficiently and reducing time-consuming routine work. Getting started is initially difficult for some, but once a good basic framework has been created, continuing the work becomes increasingly easier. Small helpers that are easy to implement also convince some superiors to allocate time and capacity for corresponding activities.

]]>
https://www.swms.de/en/blog/application-extensions-for-siemens-nx-with-the-nxopen-interface/feed/ 1 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
How do you teach Artificial Intelligence (AI) to work with Azure Machine Learning? https://www.swms.de/en/blog/how-do-you-teach-artificial-intelligence-ai-to-work-with-azure-machine-learning/ https://www.swms.de/en/blog/how-do-you-teach-artificial-intelligence-ai-to-work-with-azure-machine-learning/#comments Wed, 06 Feb 2019 15:37:00 +0000 IoT https://www.swms.de/en/blog/how-do-you-teach-artificial-intelligence-ai-to-work-with-azure-machine-learning/ Read more

]]>
Machine learning is currently on everyone's lips across all industries. The detection of errors in production and manufacturing processes as well as the predictive planning of maintenance measures are only some of the drivers behind the application of analytical methods and the use of artificial intelligence (AI). Data science methods such as machine learning reveal a lot about the past behavior of machines, processes and people. Predictive analytics methods additionally make it possible to forecast future behavior.

How does this complicated-sounding machine learning actually work, and how can you test its applicability to your own question?

We therefore want to approach this complex environment step by step by creating demo and training data, and point out the expected added value. We use cloud services from Azure (Microsoft Azure is Microsoft's cloud computing platform) to illustrate the possibilities.

Preparation of test and training data

In order to build confidence in analytical methods, it makes sense to create test and training data that can be examined in a first step. Various approaches are conceivable for this. In our example we prepare the data in Microsoft Excel. Here we generate measured values that indicate a production error when a certain temperature value is exceeded or undercut, as well as within a specific temperature range. Specifically, an error occurs in manufacturing quality if the process temperature exceeds 16 degrees, falls below -16 degrees, or lies in the range from -5 to +5 degrees.


Figure 1: Temperature-dependent malfunction of a motor

To simulate this, we create an Excel file with two columns: one column for the temperature values, later determined by a corresponding sensor. For demo purposes, we generate random temperature values from -100 to +100 degrees with the help of a simple Excel formula (=RANDBETWEEN(-100;99) + RANDBETWEEN(0;999)/1000). In a second column we describe the state of a motor unit, recording whether there is a fault at the corresponding temperature or not. The malfunction is also described with a simple formula that takes the described threshold values and temperature ranges into account.


Figure 2: Prepared test and training data

We then convert this formula-based evaluation into a plain-text file (CSV format), removing all formulas so that only the values remain, together with the statement as to whether there is an engine error. This step completes the preparation of the training and test data.
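For readers who prefer scripting the preparation, the same training data can also be generated without Excel. The following Python sketch mirrors the described scheme (the thresholds 16 and -16 and the range -5 to +5); the column names and the semicolon delimiter are our own illustrative choices:

```python
import csv
import io
import random

def has_fault(temperature):
    """Fault rule from the example: error above 16 degrees, below
    -16 degrees, or inside the range from -5 to +5 degrees."""
    return temperature > 16 or temperature < -16 or -5 <= temperature <= 5

def generate_training_data(out, rows=1000, seed=42):
    """Write 'Temperature;Fault' records, mirroring the Excel sheet."""
    rng = random.Random(seed)
    writer = csv.writer(out, delimiter=";")
    writer.writerow(["Temperature", "Fault"])
    for _ in range(rows):
        # mirrors =RANDBETWEEN(-100;99) + RANDBETWEEN(0;999)/1000
        temp = rng.randint(-100, 99) + rng.randint(0, 999) / 1000
        writer.writerow([temp, has_fault(temp)])

buffer = io.StringIO()
generate_training_data(buffer)
csv_text = buffer.getvalue()
```

Writing `csv_text` to a file yields the same kind of value-only CSV as the Excel export described above.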

Using the Azure Machine Learning Studio

Azure Machine Learning Studio is a powerful cloud-based predictive analysis service that enables you to quickly create predictive models and deploy them as analysis solutions.

The first step in working with the Learning Studio is to create a so-called Azure Machine Learning Studio workspace. A workspace makes it possible to create and manage machine learning experiments and predictive web services. Several workspaces can be created, each containing the experiments, datasets, trained prediction models, web services, etc. As the creator of a workspace, we can invite other users to share it and thus make the created predictive analysis solutions available to them.

Provision of the evaluation logic as web service

We used the Azure Machine Learning Studio in the previous step to develop a predictive analysis model. We then make this logic available as an Azure Machine Learning Studio web service. The predictive web service can then easily be used by custom applications or BI tools such as Power BI, Excel, Flow or similar.

Step-by-step procedure for creating the predictive web service

But let's take it one step at a time. Here is a short "step by step" guide to get the required result and make a prediction of the failure of an engine depending on the temperature.

First we need an Azure account. This can be set up free of charge as a demo account at https://azure.microsoft.com/en-us/. Afterwards we can log in via https://portal.azure.com. A dashboard opens from which all Azure services can be accessed.

On the left side we can then create a new resource via [Create Resource]. Here we select [Machine Learning Studio Workspace].


Figure 3: Create workspace

For the creation of the workspace, some information such as a storage account must be provided. In this configuration view, all data can be entered directly or missing items can be created. For a quick start, it is recommended to accept the suggested settings. Once everything is configured, the workspace is created. The progress can be viewed under [Notifications], and the newly created resource can be opened directly on completion.


Figure 4: Creating and opening a Machine Learning Workspace

After opening the resource you can start the [Machine Learning Studio] (see picture gallery).

Figures 5-8: Picture gallery on how to start the Machine Learning Studio

After the data sets are available in Azure, we create a new [Experiment]. For this, click on [New] and create an empty experiment [Blank Experiment]. Then the workspace should appear as shown in the following figure.


Figure 9: Experiment created

Now we can add function blocks to our experiment using the drag and drop functions. The available blocks can be selected from a treeview on the left side and are simply "dragged" into the workspace.


Figure 10: Populating the experiment

First, we add our test data to the experiment. In the next step we divide the data into two parts using [Split Data]. One part of the data is used to train the model; the other part is used to test the trained model. As a rule of thumb, the split can be set to 70 percent training data and 30 percent test data. The next step is to select a suitable algorithm. In our case we choose the [Two-Class Boosted Decision Tree].
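For illustration, the rule-of-thumb split that the [Split Data] module performs can be sketched in a few lines of Python (variable names and the synthetic data are illustrative):

```python
import random

def split_data(rows, train_fraction=0.7, seed=0):
    """Shuffle and split rows into training and test sets,
    as the [Split Data] module does with a 70/30 rule of thumb."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 100 synthetic (temperature, fault) pairs following the article's rule.
data = [(t, t > 16 or t < -16 or -5 <= t <= 5) for t in range(-50, 50)]
train, test = split_data(data)   # 70 training rows, 30 test rows
```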

What is a decision tree?
Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning.
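To make the principle tangible, here is a minimal hand-rolled decision tree learner for our one-dimensional temperature example. This is only an illustrative sketch; Azure's [Two-Class Boosted Decision Tree] combines many such trees into a boosted ensemble:

```python
def gini(labels):
    """Gini impurity of a list of boolean fault labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def build_tree(points, depth=0, max_depth=4):
    """Greedily split (temperature, fault) pairs on the best threshold."""
    labels = [y for _, y in points]
    majority = labels.count(True) >= labels.count(False)
    if depth == max_depth or len(set(labels)) == 1:
        return ("leaf", majority)          # conclusions live in the leaves
    best = None                            # (weighted impurity, threshold)
    xs = sorted({x for x, _ in points})
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2                  # candidate split between neighbors
        left = [y for x, y in points if x <= t]
        right = [y for x, y in points if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(points)
        if best is None or w < best[0]:
            best = (w, t)
    if best is None:                       # no split possible: single x value
        return ("leaf", majority)
    t = best[1]
    return ("node", t,
            build_tree([p for p in points if p[0] <= t], depth + 1, max_depth),
            build_tree([p for p in points if p[0] > t], depth + 1, max_depth))

def predict(tree, x):
    """Walk the branches until a leaf gives the conclusion."""
    if tree[0] == "leaf":
        return tree[1]
    _, t, left, right = tree
    return predict(left if x <= t else right, x)

# Train on the synthetic fault rule used in this article.
data = [(t, t > 16 or t < -16 or -5 <= t <= 5) for t in range(-30, 31)]
tree = build_tree(data)
```

`predict(tree, 0)` then reproduces the interval rule, returning True (fault) inside the range -5 to +5 and False for, say, 10 degrees.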

We then connect the ports at the edges of the function blocks with each other: the output of the dataset is connected to the module that divides it into training and test data. Next, we feed the training data (one output of the split-data module) and our chosen algorithm into the training model. Then we have to tell the training model which column to "train".


Figure 11: Specify column for training evaluation

After successful selection, the trained model and the test data derived from the dataset are evaluated in the function module [Score Model] for reliability of the prediction. In order to do this, the model must first be calculated. This is done by right-clicking on the Score Model block and executing the [Run selected] function.


Figure 12: Calculate model

Shortly afterwards, the calculation result is available and can be viewed via [Scored dataset / Visualize].


Figure 13: Comparison of training data with forecast data [scored labels]

Provision of the result

The result can then also be made available via WebService.


Figure 14: Deploying WebService

To do this, click on the [Set Up WebService] button. This creates a way to enter temperature values via a web interface. The work area is automatically extended by a [Predictive experiment] tab. The automatically added function block [WebService input] can now be seen here as the input for the function block [Score Model].


Figure 15: Web Service Output

Furthermore, based on the model data, the failure of the engine is predicted. The data can then be queried using the [WebService output]. To make this procedure available, a calculation run via [Run] is necessary, after which [Deploy WebService] must be called.


Figure 16: Deploy WebService

Once this has been successfully completed, temperature values can be entered to query whether a motor failure will occur on the basis of this trained model.


Figure 17: Calling the WebService Input via test page [automatically generated]

On the automatically generated test page, the input data, in this case the temperature, can be entered via [Test]. The response of the web service output can then be viewed in the lower area.


Figure 18: Uncritical temperature value

For the demo query, 10 degrees were entered here. The value "FALSE" can then be read in the third column, meaning that no failure is predicted at this temperature.

In the next example, a temperature value of 4 degrees is selected. The result now reads "TRUE", which indicates a failure.


Figure 19: Critical temperature value

Conclusion

It is very easy to take your first steps in machine learning with the Azure Machine Learning Studio. We have shown how elements of artificial intelligence can be taught and made available simply by configuring ready-made function blocks, without in-depth programming knowledge. The convenient provision of web services also facilitates the integration of forecast results into applications. In further articles we will focus on different algorithms, as this article is only intended to give a first impression. The areas of supervised and unsupervised learning will also be covered in further articles.

We are happy to take up suggestions for topics and articles and deal with them.

Feature Image source: https://pixabay.com/de/netz-netzwerk-programmierung-3706562/


]]>
https://www.swms.de/en/blog/how-do-you-teach-artificial-intelligence-ai-to-work-with-azure-machine-learning/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Azure IoT-Hub and Microsoft Flow https://www.swms.de/en/blog/azure-iot-hub-and-microsoft-flow/ https://www.swms.de/en/blog/azure-iot-hub-and-microsoft-flow/#comments Fri, 21 Dec 2018 15:47:00 +0000 IoT https://www.swms.de/en/blog/azure-iot-hub-and-microsoft-flow/ Weiterlesen

]]>
What is Azure IoT-Hub?

Microsoft's Azure IoT Hub is a cloud service for the secure management of Internet of Things (IoT) devices. Sending messages or telemetry data to the Azure IoT-Hub is achieved through the use of appropriate software. However, this can also be done with the help of Microsoft Flow.

Microsoft's Azure IoT-Hub is the scalable transport layer for connecting, monitoring and managing Internet of Things devices. There are countless examples and solutions for sending (test) data to the IoT-Hub. Nearly all of these approaches have the disadvantage that you have to deal with single-board computers and/or the programming of simulators, which can be very time-consuming.

The task is to find a simple method to send data (messages) to the Azure IoT-Hub without much effort.

The creation of an IoT-Hub and related devices in Microsoft Azure is not discussed in this article. For more information, visit: 

https://azure.microsoft.com/en-us/services/iot-hub/

What does the concrete solution look like?

Sending events and messages with HTTP via REST (Representational State Transfer, a programming interface/paradigm) seems to be a favorable procedure here. Using the POST method, an HTTP request can be sent to the IoT-Hub in order to transport the data in its body. The structure of the URI to be used is as follows:

https://{IoT-HubID}.azure-devices.net/devices/{DeviceID}/messages/events?api-version=2016-02-02

IoT-HubID is the unique name of the Azure IoT-Hub used and DeviceID is the unique device ID. The body of the call contains the value pairs according to the following scheme:

Body structure for the value pairs:

{
    "Attribut_1": "Messwert_1",
    "Attribut_n": "Messwert_n"
}

Body example:

{
    "Ort": "Oldenburg",
    "Temperatur": "12,0"
}

For security reasons, a SAS token must also be sent in the header of the HTTP request. The device is then authenticated and can send data to the IoT-Hub. The SAS token can be generated with the help of a few lines of code or, more conveniently, with the Azure Device Explorer.
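As an illustration of those "few lines of code", the following Python sketch follows Azure's documented SAS scheme (an HMAC-SHA256 signature over the URL-encoded resource URI and the expiry timestamp); the resource URI and the key below are placeholder values:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, b64_key, policy_name=None, ttl_seconds=3600):
    """Sign '<url-encoded uri>' + newline + '<expiry>' with HMAC-SHA256,
    as the Azure IoT-Hub SAS scheme describes."""
    expiry = int(time.time()) + ttl_seconds
    uri_enc = urllib.parse.quote_plus(resource_uri)
    to_sign = f"{uri_enc}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(base64.b64decode(b64_key), to_sign, hashlib.sha256).digest())
    token = (f"SharedAccessSignature sr={uri_enc}"
             f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}")
    if policy_name:                 # only needed for hub-level shared access policies
        token += f"&skn={policy_name}"
    return token

# Placeholder key: the real one comes from the device's primary key in Azure.
demo_key = base64.b64encode(b"not-a-real-device-key").decode()
token = generate_sas_token("myhub.azure-devices.net/devices/device01", demo_key)
```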

The IoT-Hub connection string must simply be entered in the Device Explorer Twin. It can be found in the Azure Portal under the desired IoT-Hub in the "Settings / Shared access policies" area:


Fig. 1: IoT-Hub – Shared access policies

The "Connection string - primary key" is entered in the Device Explorer as follows:

Fig. 2: Device Explorer Twin – Connection string - Primary key

In the Device Explorer under the tab "Management" you can find all devices registered in the IoT-Hub after a [Refresh].

Fig. 3: Device Explorer Twin – Device Management

The [SAS Token...] button can then be used to generate the SAS token by specifying the validity period (TTL Days). We only need the marked part of the generated text. This can simply be copied to the clipboard for further use.

Fig. 4: Device Explorer – SAS Token Form

For the HTTP request, we now have the URI with IoT HubID and DeviceID, the header with the generated SAS token, and the body with the corresponding measurement data.
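Before switching to Flow, assembling this request can also be sketched directly with Python's standard library; the hub name, device ID and token below are placeholder values:

```python
import json
import urllib.request

def build_iot_hub_request(hub_id, device_id, sas_token, payload):
    """Assemble the POST request from URI, SAS header and JSON body."""
    uri = (f"https://{hub_id}.azure-devices.net/devices/{device_id}"
           f"/messages/events?api-version=2016-02-02")
    return urllib.request.Request(
        uri,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": sas_token,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_iot_hub_request(
    "myhub",                          # placeholder IoT-HubID
    "device01",                       # placeholder DeviceID
    "SharedAccessSignature sr=...",   # placeholder SAS token
    {"Ort": "Oldenburg", "Temperatur": "12,0"},
)
# urllib.request.urlopen(req) would actually send the message.
```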

How to send messages using Microsoft Flow?

Microsoft Flow can be used to assemble all parts and send data to the IoT hub. With Microsoft Flow it is possible to create and manage automated workflows. Further information about Microsoft Flow can be found at the following link:

https://emea.flow.microsoft.com/en-us/

We only need two connectors here: a Flow button connector and an HTTP connector. The advantage of this combination is that the Flow app can also be used on a mobile phone to record and send measured values.

The overall configuration for our flow example "AzureIoTHubTest" thus looks like this:


Fig. 5: Microsoft Flow – “AzureIoTHubTest”

The Button Connector is configured as follows:


Fig. 6: Button Connector

As an example, two fields ("Status" and "Temperature") are added, where "Status" contains a list of possible options and "Temperature" is a free input field. By default, the flow button also provides the location, user, timestamp, etc.

In the next connector everything is put together:


Fig. 7: HTTP Connector

POST is selected as the method for the HTTP request. The URI is specified as described above, the generated SAS token is entered in the header, and the body is assigned the required value pairs.

If the flow is now started via mobile phone or the MS-Flow-Dashboard, the successful workflow run can be tracked in the course of the process.


Fig. 8: Successfully executed flow

The values of the button passed in the body can also be seen here.

On the Azure IoT Hub overview page, the reception of the message is displayed:

Fig. 9: Testing the usage in the IoT-Hub dashboard

Conclusion

Sending messages to the Azure IoT-Hub is easy with Microsoft Flow, no matter whether you want to send test data to the IoT-Hub quickly or collect real data via mobile phone in the field. The combination of Azure IoT-Hub, Device Explorer and Microsoft Flow allows a fast and comfortable realization of such applications.

If IoT devices are to send data to the IoT-Hub themselves, corresponding code snippets must be used for the authentication and transmission of telemetry data. This requires a certain, but manageable, development effort. It would, however, also enable bidirectional communication between IoT devices and the solution backend. Finally, the data has to be analyzed; here too, Microsoft Azure offers corresponding services such as Stream Analytics and Machine Learning.

If you don't want to orchestrate these services yourself, you are well advised to use the pre-configured solutions from the Azure IoT Solution Accelerators. There are solutions for remote monitoring, maintenance solutions, solutions for the connected factory and various device simulations. The advantage of these preconfigured solutions is that the used Azure services can be adapted and extended by the user as if the services were added manually in Azure.

Another IoT service in Azure is IoT-Central. IoT-Central is a fully managed SaaS (Software as a Service) solution that makes it easy to link, manage and monitor IoT resources. Setting up this service is very simple, so the effort is limited.

Overall, Microsoft Azure IoT provides a comprehensive collection of services and solutions for creating end-to-end IoT applications. From fully hosted and managed to customizable services that can be tailored to meet the specific needs of each user's industry.


Feature image source: https://pixabay.com/de/netzwerk-iot-internet-der-dinge-782707/

]]>
https://www.swms.de/en/blog/azure-iot-hub-and-microsoft-flow/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Innovative 5-Axis deburring https://www.swms.de/en/blog/innovative-5-axis-deburring/ https://www.swms.de/en/blog/innovative-5-axis-deburring/#comments Tue, 16 Oct 2018 03:26:00 +0000 plm https://www.swms.de/en/blog/innovative-5-axis-deburring/ Weiterlesen

]]>
The solution for automated deburring with Siemens NX: The Automated Deburring Module from SWMS in use. 

Our video shows the application of the deburring strategy on a Spinner U-1520 machine tool with a Heidenhain iTNC 530 control. 

In addition to the full support of 5-axis simultaneous movements, the particularly uniform shape of the deburring bevel is clearly visible. 

Innovative 5-Axis Deburring - Check out our Automated Deburring Module.



]]>
https://www.swms.de/en/blog/innovative-5-axis-deburring/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Potentials and risks of the cloud https://www.swms.de/en/blog/potentials-and-risks-of-the-cloud/ https://www.swms.de/en/blog/potentials-and-risks-of-the-cloud/#comments Tue, 16 Oct 2018 03:23:00 +0000 consulting en https://www.swms.de/en/blog/potentials-and-risks-of-the-cloud/ Weiterlesen

]]>
How SMEs benefit from modern IT

Technically speaking, the cloud has long been established in modern corporate IT, but small and medium-sized enterprises (SMEs) in particular still see uncertainties regarding the potentials and, above all, the risks of a cloud strategy. Therefore, they often shy away from exploiting the obvious potential.

Compared with the in-house operation of IT systems, the use of IT resources from the cloud promises to save up to 50% of the infrastructure costs in the area considered. But the direct cost aspect is not the only argument for a cloud strategy. Simplified access to tailor-made IT resources offers great potential, from increasing the efficiency of processes through further digitization to completely new business models that allow companies to offer products and services in line with the market and to bind customers individually to their own company.

In our whitepaper, we explain the cloud concept briefly and clearly and address the most frequent reservations of a cloud strategy.


Download PDF

]]>
https://www.swms.de/en/blog/potentials-and-risks-of-the-cloud/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
From MS Access to Web App https://www.swms.de/en/blog/from-ms-access-to-web-app/ https://www.swms.de/en/blog/from-ms-access-to-web-app/#comments Tue, 16 Oct 2018 03:22:00 +0000 consulting en https://www.swms.de/en/blog/from-ms-access-to-web-app/ Weiterlesen

]]>
Strategies for modernizing enterprise tools

Where spreadsheets are no longer sufficient, many companies have developed small software tools that are still used today to manage all kinds of data from different processes. With a tool like MS Access it was possible not only for developers but also for ambitious users to create data-based applications. Not infrequently, small data-based software tools were commissioned for specific questions. The areas in which such tools are used range from simple address databases through documentation systems for specific processes to complete enterprise applications.

Not infrequently, long-serving software tools become increasingly difficult to maintain: developers and key users are no longer available, new program versions or operating systems are no longer supported, or user requirements change. With web-based applications, the growing demands on data-based software tools can be met.

In our white paper, learn how you can retain the benefits of custom software tools and address the increased needs with web-based applications.


Download PDF

]]>
https://www.swms.de/en/blog/from-ms-access-to-web-app/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Smart Fastener(s) with ADT? https://www.swms.de/en/blog/smart-fastener-s-with-adt/ https://www.swms.de/en/blog/smart-fastener-s-with-adt/#comments Tue, 16 Oct 2018 03:18:00 +0000 plm https://www.swms.de/en/blog/smart-fastener-s-with-adt/ Weiterlesen

]]>
Error-free data for Fastener Positions

What can ADT do?

The Attribute Definition Tool (ADT) software allows not only a purely geometric view but also an automated introduction of riveting and drilling information into CATIA V5 models.

Consistent data management is important here. An airplane, for example, consists of many individual components, and therefore a considerable number of rivet positions come together. It would be efficient to bring these into the component simply and, most importantly, fully automatically: reliable, consistent and automated processes.

Exactly these requirements are fulfilled by the ADT (Attribute Definition Tool) software. Designed for the aerospace industry, it is the right tool to introduce fastener designs and manufacturing attributes into the CATIA V5 model.

How do I achieve consistent data management?

SWMS tackled the problem of consistent data management of fastener information, which includes early error detection and time savings.

Problem:

Within a big assembly, thousands of fastening positions are to be provided with process information. This includes, for example, layer information such as material and thickness, the sealant used, preceding processes such as pre-drilling of specific positions, and much more. Furthermore, a consistent workflow should be established for improved product quality and the prevention of unnecessary costs. In addition, the software should integrate seamlessly into existing processes.


A custom model structure in ADT

The challenge here lies in the parallel process strings: during the construction and further development of assemblies, the programming of the production machines already starts. The toolset contains two different modules: ADT-E (Engineering) and ADT-M (Manufacturing). The individual modules are tailored to the respective requirements of engineering and manufacturing. The communication between the two modules takes place exclusively via the CATIA V5 models used. For this purpose, a separate model structure was developed which creates a connection between engineering and manufacturing and is integrated in the assembly structure.




In every assembly processed with ADT, the software creates a component to be processed. The created component contains all geometries and parameters generated by the software and represents the connection between engineering and manufacturing.

Engineering model structure

When using ADT-E, a CATIA V5 part is created in the component generated by the software. In this part, all fasteners defined by the designer in the software are created with geometry and parameters. The geometry is not associative; updates are handled via unique identification by a version number.

Manufacturing model structure

For manufacturing, an independent manufacturing model is created for each assembly to be manufactured. The manufacturing model contains the component to be manufactured with the component generated by ADT-E as well as the lifecycle generated by ADT-M. In this case, the engineering module can only be accessed with read privileges.

What can the modules do?




The engineering module has all the functions needed to define the fasteners. All fastener-specific information can be configured here and then transferred to all selected fastener positions in the CATIA model. After this step, the positions to be defined can be imported from various CAD models into ADT: either individual positions directly from the model, or many points from the geometrical sets. The "Geometrical Set Selection" option is available for this purpose, with several options to choose from:

  • Include Sub Geometrical Sets
  • Points
  • Lines
  • Pattern
  • Sketches

When this process is completed, a tabular view appears of all imported positions with the most important information such as point name, fastener type, coordinates and vector orientation, as well as all relevant results from the stack calculation. If all this information is correct, it can be transferred to the CATIA model with one click. All parameters can of course be changed later at any time. After importing all process information from ADT into the model, CATParts are automatically positioned at each point according to the selected fastener type, for better visualization.

Once all engineering information has been entered into the model, manufacturing can start. First, the engineering product is loaded so that the current status is always available. Even starting from this step, there is always the possibility to compare the engineering environment with that of the manufacturing.

The next step is to generate lifecycles.






The results can be added to the selected lifecycle via the context menu.




In the manufacturing module, just like in the engineering module, all items can be loaded by selection from the CATIA V5 model.

Once the desired positions have been imported for each lifecycle, they can be processed further via the context menu. For example, processing information can be added per position, or for each fastener position the lifecycle in which it was developed can be displayed. It is also possible to change the properties of a fastener position.


Conclusion

With ADT, you have the ability to define fastener design and manufacturing attributes in CATIA V5 in a structured and largely automated way. This is how smart fasteners come with ADT.

Customized processes and workflows are no problem.

Contact us for more information.

Featured Image Source: http://fotos.felixbecker.eu/#!album-17-37

]]>
https://www.swms.de/en/blog/smart-fastener-s-with-adt/feed/ 1 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico
Myths Surrounding Unit Tests https://www.swms.de/en/blog/myths-surrounding-unit-tests/ https://www.swms.de/en/blog/myths-surrounding-unit-tests/#comments Tue, 16 Oct 2018 03:12:00 +0000 software-development https://www.swms.de/en/blog/myths-surrounding-unit-tests/ Weiterlesen

]]>
Why Unit Tests?

Testing products before delivery is not new. Machines of any kind are extensively tested for stability, running time and safety before they reach the customer. Properties of physical products are thus comparatively easy to test. But how can software be tested?

Software engineering distinguishes between various types of tests. In this blog entry, we will limit ourselves to one of the most common: unit tests. They are easy to use, can be maintained at relatively low cost, and thus represent major support during development.

Unit tests, in their most primitive form, verify that a tiny unit of the whole software returns the expected output for a given input. Unit tests belong to the so-called "black box" tests: it is not necessary to understand the internal functionality of the code part being tested, as long as the defined expectations are fulfilled.
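As a minimal illustration, consider a tiny unit that converts a net price into a gross price; the function and its tests are hypothetical examples using Python's built-in unittest framework:

```python
import unittest

def net_to_gross(net, vat_rate=0.19):
    """The 'tiny unit' under test: add VAT to a net price."""
    if net < 0:
        raise ValueError("net price must be non-negative")
    return round(net * (1 + vat_rate), 2)

class NetToGrossTest(unittest.TestCase):
    """Black-box expectations: given input, expected output."""

    def test_standard_vat_rate(self):
        self.assertEqual(net_to_gross(100.0), 119.0)

    def test_result_is_rounded_to_cents(self):
        self.assertEqual(net_to_gross(0.99), 1.18)

    def test_zero_price_stays_zero(self):
        self.assertEqual(net_to_gross(0.0), 0.0)

# Run the test case programmatically and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NetToGrossTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test states an expectation against the unit's output without relying on how `net_to_gross` is implemented internally.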

Advantages

A key benefit of unit tests is that they exist during the development of the software, helping to detect errors in the early stages. The program does not have to be fully developed; individual modules of the software can be tested in isolation. When adjustments are made, for example due to customer change requests, it can be continuously verified that functions work correctly and that changes cause no unexpected side effects. If several developers work on one software product, each can constantly check that their own changes have no unexpected (negative) side effects on other parts.

Especially for larger projects, the overhead of writing unit tests is initially considered inefficient. In the long run, however, it pays off to have unit tests in a project from the beginning. Their automated execution helps to ensure that the software still runs without errors as it is extended.

A general rule: the later a bug is found, the more expensive the correction.




Problems and reasons against unit tests

Below we provide answers to common problems and reasons that speak against creating unit tests.




This method is too complicated to test

If a test fails for a complicated method, it is all the more difficult to find the cause of the problem. So if a method is so extensive that it covers multiple functionalities, these functions should be extracted into smaller units. These smaller software parts can then be tested individually.

In general, the more complex the functionality, the more important are tests. Unit tests help to understand the methods and above all they support the maintainability of the program.

How should a test detect errors in my functions if I write both myself?

A common question on the topic of unit tests. Unit tests support the development process precisely because the developer actively considers possible scenarios. One guideline is to write unit tests from the perspective of a product user: they do not test the implementation, but the desired behavior of a functionality.

Furthermore, experience has shown that it is worthwhile to test invalid inputs as well. In manual testing scenarios, the developer often only thinks about what the desired result is. It is also interesting to see how an application responds to unanticipated inputs, such as performing an action on non-existent data.
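A sketch of such a test for unanticipated inputs, using a hypothetical minimal data store invented only for this illustration:

```python
import unittest

class OrderStore:
    """Hypothetical minimal store used only for this illustration."""

    def __init__(self):
        self._orders = {}

    def add(self, order_id, item):
        self._orders[order_id] = item

    def get(self, order_id):
        # Acting on non-existent data should fail loudly, not silently.
        if order_id not in self._orders:
            raise KeyError(f"unknown order: {order_id}")
        return self._orders[order_id]

class OrderStoreInvalidInputTest(unittest.TestCase):
    def test_lookup_of_missing_order_raises(self):
        store = OrderStore()
        with self.assertRaises(KeyError):
            store.get("does-not-exist")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderStoreInvalidInputTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```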

Preparing the tests is more complicated than what I want to test.

There are several reasons why preparing a unit test can be complicated. On the one hand, the creation of the test data can be extensive. In this case, it makes sense to provide test data centrally for all unit tests, as it can be reused in further tests. This is often the case when writing tests for functions that access data from an external data source.

On the other hand, some methods are difficult to test in isolation because of their surroundings. Here, it may be useful to consider extracting the method.

I have no time to write unit tests

Under pressure, developers often see only the extra work of writing unit tests and forget the actual benefit. Yet it is precisely in times of rapid feature development that mistakes become more common, which makes unit testing all the more important. The following picture shows the vicious cycle of not writing unit tests:



Conclusion

Unit tests are an excellent way to test software during development and thus significantly increase software quality. Preventing errors in all modules consistently ensures that the result contains fewer errors and remains manageable. Writing unit tests from the beginning helps reduce the extra work involved, and ensures that care is taken to make individual functions testable in isolation.

Experience shows that after a short time developers notice the enormous value of unit tests, as they feel more comfortable making changes and further developments.

Featured Image Source: https://pixabay.com/de/checkliste-umfrage-h%C3%A4ckchen-ok-2320130/


]]>
https://www.swms.de/en/blog/myths-surrounding-unit-tests/feed/ 0 https://www.swms.de/favicon.ico https://www.swms.de/favicon.ico