
In the first and second parts of this series I described how to set up the hardware to read data from the OBD port using a Raspberry Pi and then upload it to the cloud. You should read both of those articles before you read this one.

Having done all the work to capture and transport the data to the cloud, let us figure out what can be done there to introduce Machine Learning. To understand the concepts in this article you will need to be familiar with Javascript and Python. I am also using MongoDB as my database, so you will need to know the basics of a document-oriented database to follow the example code. MongoDB is not only my storage engine here but also my compute engine: I am using the database's scripting language (Javascript) to pre-process the data that will ultimately be used for machine learning. The Javascript code given herein executes, in a distributed manner, inside the MongoDB database. (Some people get confused when they see Javascript and assume it needs a runtime like NodeJS - not here.)

Introduction to the Solution

To set the stage, I will describe the following three tasks in this article:

  1. Read the raw records and augment them with additional derived information, generating some extra features for machine learning that are not directly sent by the Raspberry Pi. Time-derivatives based on the interval between readings can be used to derive quantities such as instantaneous acceleration, angular velocity and incline. These are inserted into the derived records, which are saved to another MongoDB collection (a collection is the equivalent of a 'table' in relational database parlance) to be used later for generating the feature sets. I will be calling this collection 'mldataset' in my database.
  2. Read the 'mldataset' collection and extract features from the records. The feature set is saved into another collection called 'vehicle_signature_records'. This is an involved process since there are so many fields in the raw records. In my case, the feature sets are basically three statistics (minimum, average and maximum) of all values, aggregated over a 15-second period. Other research papers on this subject take the same approach, though the time interval over which the aggregates are taken varies with the frequency of the readings. The recommended frequency is 5 Hz, i.e. one record-set every 0.2 seconds, but as I mentioned in article 2 of this series, we are unable to read data that fast over a serial connection to the ELM 327. The maximum rate I have been able to observe (mostly in modern cars) is one record-set every 0.7 seconds, so a 15-second aggregation makes more sense in our scenario. The accuracy of the prediction may suffer as a result, but we will accept that as a constraint. The solution methodology remains the same.
  3. Apply a learning algorithm to the feature set to learn the driver behavior, then deploy the model on a machine in the cloud. In real time we calculate the same aggregates over the same time interval (15 seconds) and feed them into the model to come up with a prediction. To confirm the driver we take readings over several intervals (5 minutes gives 20 predictions) and then use the value with the maximum count, i.e. the modal value, as shown in the sketch after this list.
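
To make that last step concrete, here is a minimal Python sketch of the modal-vote idea. This is not code from the repository: `clf` stands for an already-trained classifier and `interval_features` for roughly twenty 15-second feature rows covering a 5-minute span.

    from collections import Counter

    def confirm_driver(clf, interval_features):
        # One prediction per 15-second window; the modal value over ~5 minutes is the answer
        per_window = clf.predict(interval_features)
        label, votes = Counter(per_window).most_common(1)[0]
        return label, votes / float(len(per_window))    # predicted driver and its vote fraction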

Augmenting raw data with derivatives

This is a very common scenario in IoT applications. When data is generated by a human being, it usually has useful information right at the surface - all you need to do is scan the logs and extract it. An example is finding the interests of a user based on user clicks in a shopping-cart scenario: every item the user has viewed on the web-site is directly found in the logs. An IoT device, however, is dumb; it has no emotions and no special interests. All data coming from an IoT device is the same consistent, boring stream. So where do you dig to find useful information? The answer is in the time-derivatives. The variation in values from one reading to the next provides useful insight into behavior. Examples are velocity (the derivative of displacement computed from GPS readings), acceleration (the derivative of velocity) and jerk (the derivative of acceleration). So you see, augmenting raw data with this extra information is extremely useful for IoT applications.
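
As a concrete illustration, here is a small Python sketch of that finite-difference idea. The field names `time`, `lat`, `lon`, `speed`, `heading` and `altitude` are illustrative only, not the exact keys used by the recorder.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance in kilometres between two GPS fixes
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 6371.0 * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

    def time_derivatives(prev, curr):
        # Derive quantities from two consecutive readings taken dt seconds apart
        dt = (curr["time"] - prev["time"]).total_seconds()
        if dt <= 0:
            return None
        dist_km = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
        return {
            "velocity": dist_km / dt * 3600.0,                        # km/h from GPS displacement
            "acceleration": (curr["speed"] - prev["speed"]) / dt,     # change in reported speed
            "angular_velocity": (curr["heading"] - prev["heading"]) / dt,
            "incline": (curr["altitude"] - prev["altitude"]) / dt,
        }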

In this step I am going to write some Javascript code (which runs inside the MongoDB database) to augment each raw record with derivatives. You will find all this code in the file 'extract_driver_features_from_car_readings.js', located inside the 'machinelearning' folder of the Github repository at https://github.com/anupambagchi/driver-signature-raspberry-pi.

Processing raw records

Before diving into the code, let me clarify a few things. The code is written to run as a cronjob on a machine on the same network as the MongoDB database, so that the database is accessible. Since it runs as a cron task, we need to know how many records to process from the raw data table, which means we need to do some book-keeping in the database. We have a special collection called 'book_keeping' for this purpose. One of the things it stores is the last date up to which we have processed the records. The data (in JSON format) may look like this:

  1. {
  2. "_id" : "processed_until",
  3. "lastEndTime" : ISODate("2017-12-08T23:56:56.724+0000")
  4. }

To determine where to pick up the record processing from, here is one way to do it in a MongoDB script written in Javascript:

  1. // Look up the book-keeping table to figure out the time from where we need to pick up
  2. var processedUntil = db.getCollection('book_keeping').findOne( { _id: "processed_until" } );
  3. endTime = new Date(); // Set the end time to the current time
  4. if (processedUntil != null) {
  5. startTime = processedUntil.lastEndTime;
  6. } else {
  7. db.book_keeping.insert( { _id: "processed_until", lastEndTime: endTime } );
  8. startTime = new Date(endTime.getTime() - (365*86400000)); // Go back 365 days
  9. }

The 'else' part of the logic above handles the initialization phase, when the script runs for the first time - we simply pick up all records from the past year.

Keeping track of driver-vehicle combinations

Another book-keeping task is to keep track of the driver-vehicle combinations. To make the job easier for the machine learning algorithm, each combination is converted to an index. Those indices are maintained in another book-keeping document with the id 'driver_vehicles', which looks somewhat like this:

  1. {
  2. "_id" : "driver_vehicles",
  3. "drivers" : {
  4. "gmc-denali-2015_jonathan" : 0.0,
  5. "gmc-denali-2015_mindy" : 1.0,
  6. "gmc-denali-2015_charles" : 2.0,
  7. "gmc-denali-2015_chris" : 3.0,
  8. "gmc-denali-2015_elise" : 4.0,
  9. "gmc-denali-2015_thomas" : 5.0,
  10. "toyota-camry-2009_alice" : 6.0,
  11. "gmc-denali-2015_andrew" : 7.0,
  12. "toyota-highlander-2005_arka" : 8.0,
  13. "subaru-outback-2015_john" : 9.0,
  14. "gmc-denali-2015_grant" : 10.0,
  15. "gmc-denali-2015_avni" : 11.0,
  16. "toyota-highlander-2005_anupam" : 12.0,
  17. }
  18. }

These are the names of vehicles paired with their drivers, and each combination has been assigned a number. When a driver-vehicle combination is encountered, the program checks whether that combination already exists; if not, it adds a new one. Here is the code to do it.

  1. // Another book-keeping task is to read the driver-vehicle hash-table from the database
  2. // Look up the book-keeping table to figure out the previous driver-vehicle codes (we have
  3. // numbers representing the combination of drivers and vehicles).
  4. var driverVehicles = db.getCollection('book_keeping').findOne( { _id: "driver_vehicles" } );
  5. var drivers;
  6. if (driverVehicles != null)
  7. drivers = driverVehicles.drivers;
  8. else
  9. drivers = {};
  10.  
  11. var maxDriverVehicleId = 0;
  12. for (var key in drivers) {
  13. if (drivers.hasOwnProperty(key)) {
  14. maxDriverVehicleId = Math.max(maxDriverVehicleId, drivers[key]);
  15. }
  16. }

You can see that a 'findOne' call to the MongoDB database is being made to read the hash-table into memory.

The next task is to query the raw collection to find out which records are new since the last time it ran.

NOTE: In the code segments below the dollar symbol shows up as '@'. Please make the appropriate substitution when you read them; the correct code may be found in the Github repository.

  1. // Now do a query of the database to find out what records are new since we ran it last
  2. var allNewCarDrivers = db.getCollection('car_readings').aggregate([
  3. {
  4. "@match": {
  5. "timestamp" : { @gt: startTimeStr, @lte: endTimeStr }
  6. }
  7. },
  8. {
  9. "@unwind": "@data"
  10. },
  11. {
  12. "@group": { _id: "@data.username" }
  13. }
  14. ]);

SQL vs. Document-oriented database

The next part is the crux of the actual processing in this script. To understand how it works you need to be familiar with the MongoDB aggregation framework. A task like this would take far too much code if you started writing SQL. Most relational databases, and also Spark, offer SQL as a way to process and aggregate data, and the most common reason I have heard from managers for taking that approach is that "it is easy". That works, but it is verbose. I personally prefer the MongoDB aggregation framework for my pre-processing because I can get things done much faster than with the other tools out there. It may not be "easy" in the conventional sense, but a bit of effort spent studying the aggregation framework pays off, saving a lot of development effort.

What about execution time? These scripts execute on the database nodes - inside the database. You cannot get much faster than that, since most of the time spent dealing with large data goes into transporting it from the storage nodes to the execution nodes. With the aggregation framework you get the benefits of distributed, big-data processing for free: you are using in-database analytics for the fastest execution time.

  1. // Look at all the records returned and process each driver one-by-one
  2. // The following query is a pipeline with the following steps:
  3. allNewCarDrivers.forEach(function(driverId) {
  4. var driverName = driverId._id;
  5. print("Processing driver: " + driverName);
  6. var allNewCarReadings = db.getCollection('car_readings').aggregate([
  7. {
  8. "@match": { // 1. Match all records that fall within the time range we have decided to use. Note that this is being
  9. // done on a live database - which means that new data is coming in while we are trying to analyze it.
  10. // Thus we have to pin both the starting time and the ending time. Pinning the endtime to the starting time
  11. // of the application ensures that we will be accurately picking up only the NEW records when the program
  12. // runs again the next time.
  13. "timestamp" : { @gt: startTimeStr, @lte: endTimeStr }
  14. }
  15. },
  16. {
  17. @project: { // We only need to consider a few fields for our analysis. This eliminates the summaries from our analysis.
  18. "timestamp": 1,
  19. "data" : 1,
  20. "account": 1,
  21. "_id": 0
  22. }
  23. },
  24. {
  25. @unwind: "@data" // Flatten out all records into one gigantic array of records
  26. },
  27. {
  28. @match: {
  29. "data.username": driverName // Only consider the records for this specific driver, ignoring all others.
  30. }
  31. },
  32. {
  33. @sort: {
  34. "data.eventtime": 1 // Finally sort the data based on eventtime in ascending order
  35. }
  36. }
  37. ]);

The nifty script above does a lot of things. The first thing to note is that we are operating on a live database with a constant stream of incoming data, so in order to select records for processing we need to fix the time range first and pick up only records that fall within it. Records that arrive too late to be picked up this time will be gathered and processed on the next run. This is all handled by the first 'match' clause.

The second stage is the 'project' clause, which keeps only the four fields required for the next stage of the pipeline. The 'unwind' clause flattens the arrays. The next 'match' clause selects the records for this particular driver, and the final 'sort' clause sorts the data by event time in ascending order.
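
If you prefer to drive the same kind of pipeline from Python instead of the mongo shell, a roughly equivalent pymongo call is sketched below. Dollar signs are written out normally here, and the host, time strings and driver name are placeholders.

    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)             # adjust for your own deployment
    readings = client["obd2"]["car_readings"]

    # Placeholders: ISO strings bounding this run, and the driver picked in the outer loop
    startTimeStr, endTimeStr = "2017-01-01T00:00:00", "2017-12-31T23:59:59"
    driverName = "anupam"

    pipeline = [
        {"$match": {"timestamp": {"$gt": startTimeStr, "$lte": endTimeStr}}},
        {"$project": {"timestamp": 1, "data": 1, "account": 1, "_id": 0}},
        {"$unwind": "$data"},
        {"$match": {"data.username": driverName}},
        {"$sort": {"data.eventtime": 1}},
    ]
    for record in readings.aggregate(pipeline, allowDiskUse=True):
        pass                                             # per-record augmentation would go here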

Distance on earth between two points

Before proceeding further, I would like to get one thing out of the way. Since we are dealing with a lot of latitude-longitude pairs, and subsequently trying to find displacement, velocity and acceleration, we need a way to calculate the distance between two points on the earth. There are several algorithms with varying degrees of accuracy; the haversine formula below is the one I have found to strike a good balance between accuracy and computational cost (useful if your database vendor does not already provide a distance function).

  1. function earth_distance_havesine(lat1, lon1, lat2, lon2, unit) {
  2. var radius = 3959; // miles
  3. var phi1 = lat1.toRadians();
  4. var phi2 = lat2.toRadians();
  5. var delphi = (lat2-lat1).toRadians();
  6. var dellambda = (lon2-lon1).toRadians();
  7.  
  8. var a = Math.sin(delphi/2) * Math.sin(delphi/2) +
  9. Math.cos(phi1) * Math.cos(phi2) *
  10. Math.sin(dellambda/2) * Math.sin(dellambda/2);
  11. var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
  12.  
  13. var dist = radius * c;
  14. if (unit=="K") { dist = dist * 1.609344 }
  15. if (unit=="N") { dist = dist * 0.8684 }
  16. return dist;
  17. }

We will be using this function in the next analysis. As I said before, our goal is to augment the device records with extra information pertaining to time-derivatives. The following code adds the extra fields "interval", "acceleration", "angular_velocity" and "incline" to each device record by comparing it with the preceding record.

  1. var lastRecord = null; // We create a variable to remember what was the last record processed
  2.  
  3. var numProcessedRecords = 0;
  4. allNewCarReadings.forEach(function(record) {
  5. // Here we are reading a raw record from the car_readings collection, and then enhancing it with a few more
  6. // variables. These are the (1) id of the driver-vehicle combination and (2) the delta values between current and previous record
  7. numProcessedRecords += 1; // This is just for printing number of processed records when the program is running
  8. var lastTime; // This is the timestamp of the last record
  9. if (lastRecord !== null) {
  10. lastTime = lastRecord.data.eventtime;
  11. } else {
  12. lastTime = "";
  13. }
  14. var eventTime = record.data.eventtime;
  15. record.data.eventTimestamp = new Date(record.data.eventtime+'Z'); // Creating a real timestamp from an ISO string (without the trailing 'Z')
  16. // print('Eventtime = ' + eventTime);
  17. if (eventTime !== lastTime) { // this must be a new record
  18. var driverVehicle = record.data.vehicle + "_" + record.data.username;
  19. if (drivers.hasOwnProperty(driverVehicle))
  20. record.driverVehicleId = drivers[driverVehicle];
  21. else {
  22. drivers[driverVehicle] = maxDriverVehicleId;
  23. record.driverVehicleId = maxDriverVehicleId;
  24. maxDriverVehicleId += 1;
  25. }
  26.  
  27. record.delta = {}; // delta stores the difference between the current record and the previous record
  28. if (lastRecord !== null) {
  29. var timeDifference = record.data.eventTimestamp.getTime() - lastRecord.data.eventTimestamp.getTime(); // in milliseconds
  30. record.delta["distance"] = earth_distance_havesine(
  31. record.data.location.coordinates[1],
  32. record.data.location.coordinates[0],
  33. lastRecord.data.location.coordinates[1],
  34. lastRecord.data.location.coordinates[0],
  35. "K");
  36. if (timeDifference < 60000) {
  37. // if time difference is less than 60 seconds, only then can we consider it as part of the same session
  38. // print(JSON.stringify(lastRecord.data));
  39. record.delta["interval"] = timeDifference;
  40. record.delta["acceleration"] = 1000 * (record.data.speed - lastRecord.data.speed) / timeDifference;
  41. record.delta["angular_velocity"] = (record.data.heading - lastRecord.data.heading) / timeDifference;
  42. record.delta["incline"] = (record.data.altitude - lastRecord.data.altitude) / timeDifference;
  43. } else {
  44. // otherwise this is a new session. So we still store the records, but the delta calculation is all set to zero.
  45. record.delta["interval"] = timeDifference;
  46. record.delta["acceleration"] = 0.0;
  47. record.delta["angular_velocity"] = 0.0;
  48. record.delta["incline"] = 0.0;
  49. }
  50. db.getCollection('mldataset').insert(record);
  51. }
  52. }
  53. if (numProcessedRecords % 100 === 0)
  54. print("Processed " + numProcessedRecords + " records");
  55. lastRecord = record;
  56. });
  57. });
  58.  

Note that on line 50 I am saving the record to another collection called 'mldataset', which is the collection on which I will apply feature extraction for driver signatures. The final task is to save the book-keeping values back to their respective documents.

  1. db.book_keeping.update(
  2. { _id: "driver_vehicles"},
  3. { @set: { drivers: drivers } },
  4. { upsert: true }
  5. );
  6.  
  7. // Save the end time to the database
  8. db.book_keeping.update(
  9. { _id: "processed_until" },
  10. { @set: { lastEndTime: endTime } },
  11. { upsert: true }
  12. );
  13.  

Creating the feature set for driver signatures

The next step is to create the feature sets for driver-signature analysis. I do this by reading records from the augmented collection 'mldataset' and aggregating values over every 15 seconds. For each field that contains a number (and changes often), I calculate three statistical values: the minimum over the time window, the maximum and the average. One could also include other statistics such as variance and kurtosis - I have not tried those in my experiment yet, and that is an enhancement you can add easily.
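
Just to make the windowing concrete before looking at the MongoDB version, here is a tiny pandas sketch of the same idea. The column name `eventTimestamp` is illustrative, and the production path in this article does the aggregation inside MongoDB, as described next.

    import pandas as pd

    def window_features(df):
        # Min, mean and max of every numeric column over non-overlapping 15-second windows
        df = df.set_index(pd.to_datetime(df["eventTimestamp"]))
        numeric = df.select_dtypes("number")
        feats = numeric.resample("15s").agg(["min", "mean", "max"])
        feats.columns = ["%s_%s" % (stat, col) for col, stat in feats.columns]
        return feats.dropna(how="all")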

You will find all the code in the file 'extract_features_from_mldataset.js' under the 'machinelearning' directory.

Let us do some book-keeping first.

  1. var processedUntil = db.getCollection('book_keeping').findOne( { _id: "driver_vehicles_processed_until" } );
  2. var currentTime = new Date(); // This time includes the seconds value
  3. // Set the end time (for querying) to the current time till the last whole minute, excluding seconds
  4. var endTimeGlobal = new Date(Date.UTC(currentTime.getFullYear(), currentTime.getMonth(), currentTime.getDate(), currentTime.getHours(), currentTime.getMinutes(), 0, 0))
  5.  
  6. if (processedUntil === null) {
  7. db.book_keeping.insert( { _id: "driver_vehicles_processed_until", lastEndTimes: [] } ); // initialize to an empty array
  8. }
  9.  
  10. // Now do a query of the database to find out what records are new since we ran it last
  11. var startTimeForSearchingActiveDevices = new Date(endTimeGlobal.getTime() - (200*86400000)); // Go back 200 days
  12.  
  13. // Another book-keeping task is to read the driver-vehicle hash-table from the database.
  14. // Look up the book-keeping table to figure out the previous driver-vehicle codes (we have
  15. // numbers representing the combination of drivers and vehicles).
  16. var driverVehicles = db.getCollection('book_keeping').findOne( { _id: "driver_vehicles" } );
  17. var drivers;
  18. if (driverVehicles !== null)
  19. drivers = driverVehicles.drivers;
  20. else
  21. drivers = {};
  22.  
  23. var maxDriverVehicleId = 0;
  24. for (var key in drivers) {
  25. if (drivers.hasOwnProperty(key)) {
  26. maxDriverVehicleId = Math.max(maxDriverVehicleId, drivers[key]);
  27. }
  28. }
  29.  

Using the last time stamp stored in the system, we can figure out which records are new.

  1. // Now do a query of the database to find out what records are new since we ran it last
  2. var allNewCarDrivers = db.getCollection('mldataset').aggregate([
  3. {
  4. "@match": {
  5. "data.eventTimestamp" : { @gt: startTimeForSearchingActiveDevices, @lte: endTimeGlobal }
  6. }
  7. },
  8. {
  9. "@group": { _id: "@data.username" }
  10. }
  11. ]);
  12.  

Extracting features for each driver

Now is the time to do the actual feature extraction from the data-set. Here is the entire loop:

  1. allNewCarDrivers.forEach(function(driverId) {
  2. var driverName = driverId._id;
  3. print("Processing driver: " + driverName);
  4. var startTimeForDriver = startTimeForSearchingActiveDevices; // To begin with we start with the earliest start time we care about
  5.  
  6. var driverIsNew = true;
  7. // First find out if this device already has some records processed, and has a last end time defined
  8. var lastEndTimeDevice = db.getCollection('book_keeping').find(
  9. {
  10. _id: "driver_vehicles_processed_until",
  11. "lastEndTimes.driver": driverName
  12. },
  13. {
  14. _id: 0,
  15. 'lastEndTimes.@': 1
  16. }
  17. );
  18.  
  19. lastEndTimeDevice.forEach(function(record) {
  20. startTimeForDriver = record.lastEndTimes[0].endTime;
  21. driverIsNew = false;
  22. });
  23.  
  24. //print('Starting time for driver is ' + startTimeForDriver.toISOString());
  25. //print('endTimeGlobal = ' + endTimeGlobal.toISOString());
  26.  
  27. var allNewCarReadings = db.getCollection('mldataset').aggregate([
  28. {
  29. "@match": { // 1. Match all records that fall within the time range we have decided to use. Note that this is being
  30. // done on a live database - which means that new data is coming in while we are trying to analyze it.
  31. // Thus we have to pin both the starting time and the ending time. Pinning the endtime to the starting time
  32. // of the application ensures that we will be accurately picking up only the NEW records when the program
  33. // runs again the next time.
  34. "data.eventTimestamp": {@gt: startTimeForDriver, @lte: endTimeGlobal},
  35. "data.username": driverName // Only consider the records for this specific driver, ignoring all others.
  36. }
  37. },
  38. {
  39. @project: { // We only need to consider a few fields for our analysis. This eliminates the summaries from our analysis.
  40. "data": 1,
  41. "account": 1,
  42. "delta": 1,
  43. "driverVehicleId": 1,
  44. "_id": 0
  45. }
  46. },
  47. {
  48. "@group": {
  49. "_id": {
  50. year: {@year: "@data.eventTimestamp"},
  51. month: {@month: "@data.eventTimestamp"},
  52. day: {@dayOfMonth: "@data.eventTimestamp"},
  53. hour: {@hour: "@data.eventTimestamp"},
  54. minute: {@minute: "@data.eventTimestamp"},
  55. quarter: {@mod: [{@second: "@data.eventTimestamp"}, 4]}
  56. },
  57.  
  58. "averageGPSLatitude": {@avg: {"@arrayElemAt": ["@data.location.coordinates", 1]}},
  59. "averageGPSLongitude": {@avg: {"@arrayElemAt": ["@data.location.coordinates", 0]}},
  60.  
  61. "averageLoad": {@avg: "@data.load"},
  62. "minLoad": {@min: "@data.load"},
  63. "maxLoad": {@max: "@data.load"},
  64.  
  65. "averageThrottlePosB": {@avg: "@data.abs_throttle_pos_b"},
  66. "minThrottlePosB": {@min: "@data.abs_throttle_pos_b"},
  67. "maxThrottlePosB": {@max: "@data.abs_throttle_pos_b"},
  68.  
  69. "averageRpm": {@avg: "@data.rpm"},
  70. "minRpm": {@min: "@data.rpm"},
  71. "maxRpm": {@max: "@data.rpm"},
  72.  
  73. "averageThrottlePos": {@avg: "@data.throttle_pos"},
  74. "minThrottlePos": {@min: "@data.throttle_pos"},
  75. "maxThrottlePos": {@max: "@data.throttle_pos"},
  76.  
  77. "averageIntakeAirTemp": {@avg: "@data.intake_air_temp"},
  78. "minIntakeAirTemp": {@min: "@data.intake_air_temp"},
  79. "maxIntakeAirTemp": {@max: "@data.intake_air_temp"},
  80.  
  81. "averageSpeed": {@avg: "@data.speed"},
  82. "minSpeed": {@min: "@data.speed"},
  83. "maxSpeed": {@max: "@data.speed"},
  84.  
  85. "averageAltitude": {@avg: "@data.altitude"},
  86. "minAltitude": {@min: "@data.altitude"},
  87. "maxAltitude": {@max: "@data.altitude"},
  88.  
  89. "averageCommThrottleAc": {@avg: "@data.comm_throttle_ac"},
  90. "minCommThrottleAc": {@min: "@data.comm_throttle_ac"},
  91. "maxCommThrottleAc": {@max: "@data.comm_throttle_ac"},
  92.  
  93. "averageEngineTime": {@avg: "@data.engine_time"},
  94. "minEngineTime": {@min: "@data.engine_time"},
  95. "maxEngineTime": {@max: "@data.engine_time"},
  96.  
  97. "averageAbsLoad": {@avg: "@data.abs_load"},
  98. "minAbsLoad": {@min: "@data.abs_load"},
  99. "maxAbsLoad": {@max: "@data.abs_load"},
  100.  
  101. "averageGear": {@avg: "@data.gear"},
  102. "minGear": {@min: "@data.gear"},
  103. "maxGear": {@max: "@data.gear"},
  104.  
  105. "averageRelThrottlePos": {@avg: "@data.rel_throttle_pos"},
  106. "minRelThrottlePos": {@min: "@data.rel_throttle_pos"},
  107. "maxRelThrottlePos": {@max: "@data.rel_throttle_pos"},
  108.  
  109. "averageAccPedalPosE": {@avg: "@data.acc_pedal_pos_e"},
  110. "minAccPedalPosE": {@min: "@data.acc_pedal_pos_e"},
  111. "maxAccPedalPosE": {@max: "@data.acc_pedal_pos_e"},
  112.  
  113. "averageAccPedalPosD": {@avg: "@data.acc_pedal_pos_d"},
  114. "minAccPedalPosD": {@min: "@data.acc_pedal_pos_d"},
  115. "maxAccPedalPosD": {@max: "@data.acc_pedal_pos_d"},
  116.  
  117. "averageGpsSpeed": {@avg: "@data.gps_speed"},
  118. "minGpsSpeed": {@min: "@data.gps_speed"},
  119. "maxGpsSpeed": {@max: "@data.gps_speed"},
  120.  
  121. "averageShortTermFuelTrim2": {@avg: "@data.short_term_fuel_trim_2"},
  122. "minShortTermFuelTrim2": {@min: "@data.short_term_fuel_trim_2"},
  123. "maxShortTermFuelTrim2": {@max: "@data.short_term_fuel_trim_2"},
  124.  
  125. "averageO211": {@avg: "@data.o211"},
  126. "minO211": {@min: "@data.o211"},
  127. "maxO211": {@max: "@data.o211"},
  128.  
  129. "averageO212": {@avg: "@data.o212"},
  130. "minO212": {@min: "@data.o212"},
  131. "maxO212": {@max: "@data.o212"},
  132.  
  133. "averageShortTermFuelTrim1": {@avg: "@data.short_term_fuel_trim_1"},
  134. "minShortTermFuelTrim1": {@min: "@data.short_term_fuel_trim_1"},
  135. "maxShortTermFuelTrim1": {@max: "@data.short_term_fuel_trim_1"},
  136.  
  137. "averageMaf": {@avg: "@data.maf"},
  138. "minMaf": {@min: "@data.maf"},
  139. "maxMaf": {@max: "@data.maf"},
  140.  
  141. "averageTimingAdvance": {@avg: "@data.timing_advance"},
  142. "minTimingAdvance": {@min: "@data.timing_advance"},
  143. "maxTimingAdvance": {@max: "@data.timing_advance"},
  144.  
  145. "averageClimb": {@avg: "@data.climb"},
  146. "minClimb": {@min: "@data.climb"},
  147. "maxClimb": {@max: "@data.climb"},
  148.  
  149. "averageFuelPressure": {@avg: "@data.fuel_pressure"},
  150. "minFuelPressure": {@min: "@data.fuel_pressure"},
  151. "maxFuelPressure": {@max: "@data.fuel_pressure"},
  152.  
  153. "averageTemp": {@avg: "@data.temp"},
  154. "minTemp": {@min: "@data.temp"},
  155. "maxTemp": {@max: "@data.temp"},
  156.  
  157. "averageAmbientAirTemp": {@avg: "@data.ambient_air_temp"},
  158. "minAmbientAirTemp": {@min: "@data.ambient_air_temp"},
  159. "maxAmbientAirTemp": {@max: "@data.ambient_air_temp"},
  160.  
  161. "averageManifoldPressure": {@avg: "@data.manifold_pressure"},
  162. "minManifoldPressure": {@min: "@data.manifold_pressure"},
  163. "maxManifoldPressure": {@max: "@data.manifold_pressure"},
  164.  
  165. "averageLongTermFuelTrim1": {@avg: "@data.long_term_fuel_trim_1"},
  166. "minLongTermFuelTrim1": {@min: "@data.long_term_fuel_trim_1"},
  167. "maxLongTermFuelTrim1": {@max: "@data.long_term_fuel_trim_1"},
  168.  
  169. "averageLongTermFuelTrim2": {@avg: "@data.long_term_fuel_trim_2"},
  170. "minLongTermFuelTrim2": {@min: "@data.long_term_fuel_trim_2"},
  171. "maxLongTermFuelTrim2": {@max: "@data.long_term_fuel_trim_2"},
  172.  
  173. "averageGPSAcceleration": {@avg: "@delta.acceleration"},
  174. "minGPSAcceleration": {@min: "@delta.acceleration"},
  175. "maxGPSAcceleration": {@max: "@delta.acceleration"},
  176.  
  177. "averageHeadingChange": {@avg: {@abs: "@delta.angular_velocity"}},
  178. "minHeadingChange": {@min: {@abs: "@delta.angular_velocity"}},
  179. "maxHeadingChange": {@max: {@abs: "@delta.angular_velocity"}},
  180.  
  181. "averageIncline": {@avg: "@data.incline"},
  182. "minIncline": {@min: "@data.incline"},
  183. "maxIncline": {@max: "@data.incline"},
  184.  
  185. "averageAcceleration": {@avg: "@delta.acceleration"},
  186. "minAcceleration": {@min: "@delta.acceleration"},
  187. "maxAcceleration": {@max: "@delta.acceleration"},
  188.  
  189. // "dtcCodes": {"@push": "@data.dtc_status"},
  190. "accountIdArray": {@addToSet: "@account"},
  191.  
  192. "vehicleArray": {@addToSet: "@data.vehicle"},
  193. "driverArray": {@addToSet: "@data.username"},
  194. "driverVehicleArray": {@addToSet: "@driverVehicleId"},
  195.  
  196. "count": {@sum: 1}
  197. }
  198. },
  199. {
  200. @sort: {
  201. "_id": 1 // Finally sort the data based on eventtime in ascending order
  202. }
  203. }
  204. ],
  205. {
  206. allowDiskUse: true
  207. }
  208. );
  209.  

For each driver (or rather driver-vehicle combination) that is identified, the first task is to figure out the last processing time for that driver and find all new records (lines 6 to 22). The next task, aggregating over 15-second windows, is a MongoDB aggregation step starting at line 27. Aggregation tasks in MongoDB are described as a pipeline, where each element of the flow does a certain task and passes its result to the next element in the pipe. The first stage matches all records within the time span that we want to process (lines 29 to 36). Then we project only the few fields that are of interest to us (lines 38 to 44). The 'group' element of the pipeline does the actual job of aggregation. The key to this step is the group-by id, which includes a 'quarter' (line 55) - a number between 0 and 3 derived from the seconds value of the timestamp. This effectively creates the time windows needed for aggregation.

The actual aggregation expressions are quite repetitive. See for example lines 61 to 63, where the average, minimum and maximum load are calculated over each time window. This is repeated for every variable that we want in the feature set. Finally, the results are sorted by event time (lines 200 to 202).
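
Since those min/avg/max triplets are so repetitive, they could also be generated programmatically rather than typed out. Here is a hypothetical Python helper (not part of the repository, with dollar signs written normally) that produces the same shape of '@group' stage:

    def build_group_stage(fields):
        # Build the repetitive min/avg/max portion of the grouping stage for the given fields
        group = {}
        for field in fields:                      # e.g. "load" -> averageLoad / minLoad / maxLoad
            name = field.title().replace("_", "")
            group["average" + name] = {"$avg": "$data." + field}
            group["min" + name] = {"$min": "$data." + field}
            group["max" + name] = {"$max": "$data." + field}
        return group

    # build_group_stage(["load", "rpm", "throttle_pos"]) produces the same keys as the
    # hand-written stage above: averageLoad, minLoad, maxLoad, averageRpm, ... and so on.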

Saving the feature-set in a collection

The features thus calculated are saved to a new collection, to which I will later apply a machine-learning algorithm to create a model. The collection is called 'vehicle_signature_records', and the feature-set records are saved as follows:

  1. var lastRecordedTimeForDriver = startTimeForDriver;
  2. var insertCounter = 0;
  3. allNewCarReadings.forEach(function (record) {
  4. var currentRecordEventTime = new Date(Date.UTC(record._id.year, record._id.month - 1, record._id.day, record._id.hour, record._id.minute, record._id.quarter * 15, 0));
  5. if (currentRecordEventTime >= lastRecordedTimeForDriver)
  6. lastRecordedTimeForDriver = new Date(Date.UTC(record._id.year, record._id.month - 1, record._id.day, record._id.hour, record._id.minute, 59, 999));
  7.  
  8. record['eventTime'] = currentRecordEventTime;
  9. record['eventId'] = record._id;
  10. delete record._id;
  11. record['accountId'] = record.accountIdArray[0];
  12. delete record.accountIdArray;
  13.  
  14. record['vehicle'] = record.vehicleArray[0];
  15. delete record.vehicleArray;
  16.  
  17. record['driver'] = record.driverArray[0];
  18. delete record.driverArray;
  19.  
  20. record['driverVehicle'] = record.driverVehicleArray[0];
  21. delete record.driverVehicleArray;
  22.  
  23. record.averageGPSLatitude = parseInt((record.averageGPSLatitude * 1000).toFixed(3)) / 1000;
  24. record.averageGPSLongitude = parseInt((record.averageGPSLongitude * 1000).toFixed(3)) / 1000;
  25.  
  26. db.getCollection('vehicle_signature_records').insert(record);
  27. insertCounter += 1;
  28. });

The code above adds a few more variables to the result returned by the aggregation - identifying the driver, the vehicle and the driver-vehicle combination (lines 8 to 21) - and saves it to the database (line 26). However, lines 23 and 24 need an explanation, since they do something very important!

Coding the approximate location of the driver

One of the interesting observations I made while working on this problem is that you can dramatically improve prediction accuracy if you encode the approximate location of the driver. Imagine working on this problem for millions of drivers scattered all across the country. Most drivers generally drive around a certain location most of the time, so if their location is somehow encoded into the model, the model can converge quickly based on that location. Lines 23 and 24 attempt to do just that: they encode two numbers representing the approximate latitude and longitude of the location by storing them with reduced precision.
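
The same trick in Python is just a precision reduction; a two-line sketch (three decimal places corresponds to a grid of roughly 100 metres; the Javascript above truncates rather than rounds, which serves the same purpose):

    def coarse_location(lat, lon, places=3):
        # Three decimal places is roughly a 100-metre grid; enough to say where a driver "lives"
        return round(lat, places), round(lon, places)

    # coarse_location(37.38612345, -121.96087654) -> (37.386, -121.961)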

Some more book-keeping

The final step is to store the book-keeping values.

  1. if (driverIsNew) { // which means this is a new device with no record
  2. db.book_keeping.update(
  3. {_id: 'driver_vehicles_processed_until'},
  4. {@push: {'lastEndTimes': {driver: driverName, endTime: lastRecordedTimeForDriver}}}
  5. );
  6. } else {
  7. var nowDate = new Date();
  8. db.book_keeping.update(
  9. {_id: 'driver_vehicles_processed_until', 'lastEndTimes.driver': driverName},
  10. {@set: {'lastEndTimes.@.endTime': lastRecordedTimeForDriver, 'lastEndTimes.@.driver': driverName}} // lastRecordedTimeForDriver
  11. );
  12. }
  13.  

After doing all this work (by now you may well be exhausted just from reading through it), we are finally ready to apply some real machine-learning algorithms. Remember, I said before that 95% of a data scientist's work is preparing, collecting, consolidating and cleaning the data. You are seeing a live example of that!

In big companies there are people called data engineers who would do part of this job, but not everyone is fortunate enough to have data engineers working for them. Besides, if you can do all this work yourself, you are more indispensable to the company you work for - so it makes sense to develop these skills alongside your analysis skills as a data scientist.

Building a Machine Learning Model

Fortunately, the data has been created in a clean way, so no further clean-up is required. Our data is in a MongoDB collection called 'vehicle_signature_records'. If you are a pure data scientist, the following should be very familiar to you. The only difference between what I am going to do now and what you generally find in books and blogs is the data source: I am going to read my data sets directly from the MongoDB database instead of from CSV files. After reading the above, you should be fairly comfortable with MongoDB document structures by now. If not, don't worry - all the records we stored in this collection are flat, i.e. all values are present at the top level of each record. To illustrate how the data looks, let me show you one record from the collection.

  1. {
  2. "_id" : ObjectId("5a3028db7984b918e715c2a7"),
  3. "averageGPSLatitude" : 37.386,
  4. "averageGPSLongitude" : -121.96,
  5. "averageLoad" : 24.80392156862745,
  6. "minLoad" : 0.0,
  7. "maxLoad" : 68.62745098039215,
  8. "averageThrottlePosB" : 29.11764705882353,
  9. "minThrottlePosB" : 14.901960784313726,
  10. "maxThrottlePosB" : 38.03921568627451,
  11. "averageRpm" : 1216.25,
  12. "minRpm" : 516.0,
  13. "maxRpm" : 1486.0,
  14. "averageThrottlePos" : 20.49019607843137,
  15. "minThrottlePos" : 11.764705882352942,
  16. "maxThrottlePos" : 36.86274509803921,
  17. "averageIntakeAirTemp" : 85.5,
  18. "minIntakeAirTemp" : 84.0,
  19. "maxIntakeAirTemp" : 86.0,
  20. "averageSpeed" : 13.517712865133625,
  21. "minSpeed" : 0.0,
  22. "maxSpeed" : 24.238657551274084,
  23. "averageAltitude" : -1.575,
  24. "minAltitude" : -1.9,
  25. "maxAltitude" : -1.2,
  26. "averageCommThrottleAc" : 25.392156862745097,
  27. "minCommThrottleAc" : 6.2745098039215685,
  28. "maxCommThrottleAc" : 38.431372549019606,
  29. "averageEngineTime" : 32.25,
  30. "minEngineTime" : 32.0,
  31. "maxEngineTime" : 33.0,
  32. "averageAbsLoad" : 40.3921568627451,
  33. "minAbsLoad" : 18.431372549019606,
  34. "maxAbsLoad" : 64.31372549019608,
  35. "averageGear" : 0.0,
  36. "minGear" : 0.0,
  37. "maxGear" : 0.0,
  38. "averageRelThrottlePos" : 19.019607843137255,
  39. "minRelThrottlePos" : 4.705882352941177,
  40. "maxRelThrottlePos" : 27.84313725490196,
  41. "averageAccPedalPosE" : 14.607843137254902,
  42. "minAccPedalPosE" : 9.411764705882353,
  43. "maxAccPedalPosE" : 19.215686274509803,
  44. "averageAccPedalPosD" : 30.19607843137255,
  45. "minAccPedalPosD" : 18.823529411764707,
  46. "maxAccPedalPosD" : 39.21568627450981,
  47. "averageGpsSpeed" : 6.720000000000001,
  48. "minGpsSpeed" : 0.0,
  49. "maxGpsSpeed" : 12.82,
  50. "averageShortTermFuelTrim2" : -0.5,
  51. "minShortTermFuelTrim2" : -1.0,
  52. "maxShortTermFuelTrim2" : 1.0,
  53. "averageO211" : 9698.5,
  54. "minO211" : 1191.0,
  55. "maxO211" : 27000.0,
  56. "averageO212" : 30349.0,
  57. "minO212" : 28299.0,
  58. "maxO212" : 32499.0,
  59. "averageShortTermFuelTrim1" : -0.25,
  60. "minShortTermFuelTrim1" : -2.0,
  61. "maxShortTermFuelTrim1" : 4.0,
  62. "averageMaf" : 2.4332170200000003,
  63. "minMaf" : 0.77513736,
  64. "maxMaf" : 7.0106280000000005,
  65. "averageTimingAdvance" : 28.0,
  66. "minTimingAdvance" : 16.5,
  67. "maxTimingAdvance" : 41.0,
  68. "averageClimb" : -0.025,
  69. "minClimb" : -0.2,
  70. "maxClimb" : 0.1,
  71. "averageFuelPressure" : null,
  72. "minFuelPressure" : null,
  73. "maxFuelPressure" : null,
  74. "averageTemp" : 199.0,
  75. "minTemp" : 199.0,
  76. "maxTemp" : 199.0,
  77. "averageAmbientAirTemp" : 77.75,
  78. "minAmbientAirTemp" : 77.0,
  79. "maxAmbientAirTemp" : 78.0,
  80. "averageManifoldPressure" : 415.4026475455047,
  81. "minManifoldPressure" : 248.2073910645339,
  82. "maxManifoldPressure" : 592.9398786541643,
  83. "averageLongTermFuelTrim1" : 3.25,
  84. "minLongTermFuelTrim1" : -1.0,
  85. "maxLongTermFuelTrim1" : 7.0,
  86. "averageLongTermFuelTrim2" : -23.5,
  87. "minLongTermFuelTrim2" : -100.0,
  88. "maxLongTermFuelTrim2" : 7.0,
  89. "averageGPSAcceleration" : 1.0196509034930195,
  90. "minGPSAcceleration" : 0.0,
  91. "maxGPSAcceleration" : 1.9128551867763974,
  92. "averageHeadingChange" : 0.006215710862578118,
  93. "minHeadingChange" : 0.0,
  94. "maxHeadingChange" : 0.013477895914941244,
  95. "averageIncline" : null,
  96. "minIncline" : null,
  97. "maxIncline" : null,
  98. "averageAcceleration" : 1.0196509034930195,
  99. "minAcceleration" : 0.0,
  100. "maxAcceleration" : 1.9128551867763974,
  101. "count" : 4.0,
  102. "eventTime" : ISODate("2017-07-18T18:11:30.000+0000"),
  103. "eventId" : {
  104. "year" : NumberInt(2017),
  105. "month" : NumberInt(7),
  106. "day" : NumberInt(18),
  107. "hour" : NumberInt(18),
  108. "minute" : NumberInt(11),
  109. "quarter" : NumberInt(2)
  110. },
  111. "accountId" : "17350",
  112. "vehicle" : "toyota-highlander-2005",
  113. "driver" : "anupam",
  114. "driverVehicle" : 12.0
  115. }

That's quite a number of values for analysis! That is a good sign for us - more values give us more options to play with.

As you may have realized by now, I have come to the final stage of building the model, which is a traditional machine-learning task usually done in Python or R. So the final piece will be written in Python. You will find the entire code in 'driver_signature_build_model_scikit.py' in the 'machinelearning' directory.

Feature selection and elimination

As is common in any data-science project, one must first take a look at the data and determine whether any features need to be eliminated. If some features do not make sense for the model we are building, they should be dropped. One quick observation is that fuel pressure and incline have nothing to do with driver signatures, so I will eliminate those values from further consideration.
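
In pandas that elimination is a couple of lines; a sketch, assuming the aggregated feature records have already been loaded into a DataFrame `df`:

    # Sketch: drop the fuel-pressure and incline statistics before modelling
    unwanted = [col for col in df.columns if "FuelPressure" in col or "Incline" in col]
    df = df.drop(columns=unwanted)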

Specifically for this problem, you also need to do something special, which is a bit unusual but required in this scenario.

If you look at the features carefully you will notice that some are driver characteristics while others are vehicle characteristics, so it is important not to mix up the two sets. I have used my judgement to separate the features into the two sets below.

  1. vehicle_features = [
  2. "averageLoad",
  3. "minLoad",
  4. "maxLoad",
  5. "averageRpm",
  6. "minRpm",
  7. "maxRpm",
  8. "averageEngineTime",
  9. "minEngineTime",
  10. "maxEngineTime",
  11. "averageAbsLoad",
  12. "minAbsLoad",
  13. "maxAbsLoad",
  14. "averageAccPedalPosE",
  15. "minAccPedalPosE",
  16. "maxAccPedalPosE",
  17. "averageAccPedalPosD",
  18. "minAccPedalPosD",
  19. "maxAccPedalPosD",
  20. "averageShortTermFuelTrim2",
  21. "minShortTermFuelTrim2",
  22. "maxShortTermFuelTrim2",
  23. "averageO211",
  24. "minO211",
  25. "maxO211",
  26. "averageO212",
  27. "minO212",
  28. "maxO212",
  29. "averageShortTermFuelTrim1",
  30. "minShortTermFuelTrim1",
  31. "maxShortTermFuelTrim1",
  32. "averageMaf",
  33. "minMaf",
  34. "maxMaf",
  35. "averageTimingAdvance",
  36. "minTimingAdvance",
  37. "maxTimingAdvance",
  38. "averageTemp",
  39. "minTemp",
  40. "maxTemp",
  41. "averageManifoldPressure",
  42. "minManifoldPressure",
  43. "maxManifoldPressure",
  44. "averageLongTermFuelTrim1",
  45. "minLongTermFuelTrim1",
  46. "maxLongTermFuelTrim1",
  47. "averageLongTermFuelTrim2",
  48. "minLongTermFuelTrim2",
  49. "maxLongTermFuelTrim2"
  50. ]
  51.  
  52. driver_features = [
  53. "averageGPSLatitude",
  54. "averageGPSLongitude",
  55. "averageThrottlePosB",
  56. "minThrottlePosB",
  57. "maxThrottlePosB",
  58. "averageThrottlePos",
  59. "minThrottlePos",
  60. "maxThrottlePos",
  61. "averageIntakeAirTemp",
  62. "minIntakeAirTemp",
  63. "maxIntakeAirTemp",
  64. "averageSpeed",
  65. "minSpeed",
  66. "maxSpeed",
  67. "averageAltitude",
  68. "minAltitude",
  69. "maxAltitude",
  70. "averageCommThrottleAc",
  71. "minCommThrottleAc",
  72. "maxCommThrottleAc",
  73. "averageGear",
  74. "minGear",
  75. "maxGear",
  76. "averageRelThrottlePos",
  77. "minRelThrottlePos",
  78. "maxRelThrottlePos",
  79. "averageGpsSpeed",
  80. "minGpsSpeed",
  81. "maxGpsSpeed",
  82. "averageClimb",
  83. "minClimb",
  84. "maxClimb",
  85. "averageAmbientAirTemp",
  86. "minAmbientAirTemp",
  87. "maxAmbientAirTemp",
  88. "averageGPSAcceleration",
  89. "minGPSAcceleration",
  90. "maxGPSAcceleration",
  91. "averageHeadingChange",
  92. "minHeadingChange",
  93. "maxHeadingChange",
  94. "averageAcceleration",
  95. "minAcceleration",
  96. "maxAcceleration"
  97. ]

Having done this, we now need to build two different models - one to predict the driver and another to predict the vehicle. It will be an interesting exercise to see which of these two models has better accuracy.

Reading directly from database instead of CSV

For completeness' sake, let me first give you two utility functions that are used to pull data out of the MongoDB database (they assume the usual imports: MongoClient from pymongo, pandas as pd, json, json_util from bson, and pandas' json_normalize).

  1. def _connect_mongo(host, port, username, password, db):
  2. """ A utility for making a connection to MongoDB """
  3. if username and password:
  4. mongo_uri = 'mongodb://%s:%s@%s:%s/%s' % (username, password, host, port, db)
  5. conn = MongoClient(mongo_uri)
  6. else:
  7. conn = MongoClient(host, port)
  8. return conn[db]
  9.  
  10. def read_mongo(db, collection, query={}, projection='', limit=1000, host='localhost', port=27017, username=None, password=None, no_id=False):
  11. """ Read from Mongo and Store into DataFrame """
  12. db = _connect_mongo(host=host, port=port, username=username, password=password, db=db)
  13. cursor = db[collection].find(query, projection).limit(limit)
  14. datalist = list(cursor)
  15. sanitized = json.loads(json_util.dumps(datalist))
  16. normalized = json_normalize(sanitized)
  17. df = pd.DataFrame(normalized)
  18.  
  19. return df

The function above is Pandas-friendly: it reads data from the MongoDB database and returns a Pandas data-frame so that you can get to work immediately on the machine-learning part.
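
For example, pulling the aggregated feature records for modelling might look like this (dollar signs written normally; the time window and host are placeholders):

    from datetime import datetime

    # Placeholders for the training window
    startTime = datetime(2017, 1, 1)
    endTime = datetime(2018, 1, 1)

    df = read_mongo('obd2', 'vehicle_signature_records',
                    query={'eventTime': {'$gte': startTime, '$lte': endTime}},
                    projection={'_id': 0},
                    limit=1000000,
                    host='localhost', port=27017)   # or the database host passed on the command line
    print(df.shape)   # rows are 15-second windows, columns are the aggregated features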

In case you are not comfortable with MongoDB, I am giving you the entire dataset of aggregated values in CSV format so that you can import it into any database you wish. The file is in GZIP format, so you need to unzip it before reading it. For those of you who are comfortable with MongoDB, here is the entire database dump.

Building a Machine Learning model

Now it is time to build the learning model. At program invocation two parameters are needed - the database host and the feature set to build the model for. This is handled in the code as follows:

  1. DATABASE_HOST = argv[0]
  2. CHOSEN_FEATURE_SET = argv[1]
  3.  
  4. readFromDatabase = True
  5. read_and_proceed = False

Then I have some logic for setting the appropriate feature set within the application.

  1. if (CHOSEN_FEATURE_SET == 'vehicle'):
  2. features = vehicle_features
  3. feature_name = 'vehicle'
  4. class_variables = ['vehicle'] # Declare the vehicle as a class variable
  5. elif (CHOSEN_FEATURE_SET == 'driver'):
  6. features = driver_features
  7. feature_name = 'driver'
  8. class_variables = ['driver'] # Declare the driver as a class variable
  9. else:
  10. features = all_features
  11. feature_name = 'driverVehicleId'
  12. class_variables = ['driverVehicleId'] # Declare the driver-vehicle combo as a class variable
  13.  
  14. if readFromDatabase:
  15. if CHOSEN_FEATURE_SET == 'driver': # Choose the records only for one vehicle which has multiple drivers
  16. df = read_mongo('obd2', 'vehicle_signature_records', {"vehicle": {"@regex" : ".*gmc-denali.*"}, "eventTime": {"@gte": startTime, "@lte": endTime} }, {"_id": 0}, 1000000, DATABASE_HOST, 27017, None, None, True )
  17. else:
  18. df = read_mongo('obd2', 'vehicle_signature_records', {"eventTime": {"@gte": startTime, "@lte": endTime} }, {"_id": 0}, 1000000, DATABASE_HOST, 27017, None, None, True )

The following is mostly boiler-plate code to break the dataset up into a training set, a test set and a validation set. While doing so, all null values are set to zero as well.

  1. # First randomize the entire dataset
  2. df = df.sample(frac=1).reset_index(drop=True)
  3.  
  4. # Then choose only a small subset of the data, frac=1 means choose everything
  5. df = df.sample(frac=1, replace=True)
  6.  
  7. df.fillna(value=0, inplace=True)
  8.  
  9. train_df, test_df, validate_df = np.split(df, [int(.8*len(df)), int(.9*len(df))])
  10.  
  11. df[feature_name] = df[feature_name].astype('category')
  12.  
  13. y_train = train_df[class_variables]
  14. X_train = train_df.reindex(columns=features)
  15. X_train.replace('NODATA', 0, regex=False, inplace=True)
  16. X_train.fillna(value=0, inplace=True)
  17.  
  18. y_test = test_df[class_variables]
  19. X_test = test_df.reindex(columns=features)
  20. X_test.replace('NODATA', 0, regex=False, inplace=True)
  21. X_test.fillna(value=0, inplace=True)
  22.  
  23. y_validate = validate_df[class_variables]
  24. X_validate = validate_df.reindex(columns=features)
  25. X_validate.replace('NODATA', 0, regex=False, inplace=True)
  26. X_validate.fillna(value=0, inplace=True)

Building a Random Forest Classifier and saving it

After trying out various classifiers with this dataset, it turns out that a Random Forest classifier gives the best accuracy. Here is a graph showing the accuracy of the different classifiers used with this data set. The two best algorithms turn out to be Classification and Regression Trees and the Random Forest classifier. I chose the Random Forest classifier since it is an ensemble technique and will have better resilience.

[Figure: accuracy comparison of the classification algorithms tried on the Raspberry Pi driving dataset]
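
A comparison like the one in the chart can be reproduced with a few lines of scikit-learn. The exact list of models and settings behind the figure is not shown here, so treat these candidates as stand-ins; `X_train` and `y_train` are the training split prepared above.

    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier

    candidates = {
        "CART": DecisionTreeClassifier(),
        "RF": RandomForestClassifier(n_estimators=20),
        "LR": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(),
    }
    for name, model in candidates.items():
        scores = cross_val_score(model, X_train, y_train.values.ravel(), cv=5)
        print("%s: %.3f (+/- %.3f)" % (name, scores.mean(), scores.std()))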

This is what you need to do to build a Random Forest classifier with this dataset.

  1. dt = RandomForestClassifier(n_estimators=20, min_samples_leaf=1, max_depth=20, min_samples_split=2, random_state=0)
  2. dt.fit(X_train, y_train.values.ravel())
  3.  
  4. joblib.dump(dt, model_file)
  5. print('...done. Your Random Forest classifier has been saved in file: ' + model_file)

After building the model, I save it to a file (line 4) so that it can be read back easily when doing the prediction. To find out how well the model is doing, we use the test set to make predictions and evaluate the model score.

  1. y_pred = dt.predict(X_test)
  2. y_test_as_matrix = y_test.as_matrix()
  3. print('Completed generating predicted set')
  4.  
  5. print ('Confusion Matrix')
  6. print(confusion_matrix(y_test, y_pred))
  7.  
  8. crossValScore = cross_val_score(dt, X_validate, y_validate)
  9. model_score = dt.score(X_test, y_test_as_matrix)
  10. print('Cross validation score = ' + str(crossValScore))
  11. print('Model score = ' + str(model_score))
  12. print ('Precision, Recall and FScore')
  13. precision, recall, fscore, _ = prf(y_test, y_pred, pos_label=1, average='micro')
  14. print('Precision: ' + str(precision))
  15. print('Recall:' + str(recall))
  16. print('FScore:' + str(fscore))

Many kinds of evaluation metrics are calculated and printed in the code segment above. The most important one that I tend to look at is the overall model score, but the others give you a good idea of the bias and variance, which indicate how resilient your model is to changing values.

Measure of importance

One interesting analysis is to figure out which of the features has the most impact on the result. This can be done using the simple code fragment below:

  1. importance_indices = {}
  2. for z in range(0, len(dt.feature_importances_)):
  3. importance_indices[z] = dt.feature_importances_[z]
  4.  
  5. sorted_importance_indices = sorted(importance_indices.items(), key=operator.itemgetter(1), reverse=True)
  6.  
  7. for k1 in sorted_importance_indices:
  8. print(features[int(k1[0])] + ' -> ' + str(k1[1]))

Prediction results and Conclusion

After running the two cases, namely driver prediction and vehicle prediction, I typically get the following scores.

[Figure: driver prediction results using the Raspberry Pi data]

This is encouraging, given that there was always an apprehension that the score might not be accurate enough due to the low frequency of data collection. This is an important factor, since we are creating this model out of instantaneous time-derivatives of values, and a low sampling rate introduces significant error. The dataset has 13 different driver-vehicle combinations. There isn't a whole lot of driving data beyond the experiments that were done, but with an accuracy of 95% or above, there may be real value in this approach.

Another interesting fact is that the vehicle prediction turns out to be more accurate than the driver prediction. In other words, the parameters emitted by the car characterize the car more strongly than the driver. Most drivers drive in similar ways, but the mechanical characteristics of the car distinguish it more clearly.

Commercial Use Cases

I have shown you one example of the many applications that can be built with an approach like this. It just involves equipping your car with a smart device like a Raspberry Pi; the rest is all backend, server-side work. Here are all the use-cases I can think of. You can take up any of these as your own project and attempt to find a solution.

  1. Parking assistance
  2. Adaptive collision detection
  3. Video evidence recording
  4. Detect abusive driving
  5. Crash detection
  6. Theft detection
  7. Parking meter
  8. Mobile hot-spot
  9. Voice recognition
  10. Connect racing equipment
  11. Head Unit display
  12. Traffic sign warning
  13. Pattern of usage
  14. Reset fault codes
  15. Driver recognition (this is already demonstrated here!)
  16. Emergency braking alert
  17. Animal overheating protection
  18. Remote start
  19. Remote seatbelt notifications
  20. Radio volume regulation
  21. Auto radio off when window down
  22. Eco-driving optimization alerts
  23. Auto lock/unlock

Commercial product

After doing this experiment and building a Raspberry Pi kit from scratch, I found out that there is a product called AutoPi that you can buy, which cuts out a lot of the hardware setup. I have no affiliation with AutoPi, but I thought it was interesting that this subject is being treated quite seriously by some companies in Europe.

 

 

Published in Data Science

This is the second article of the series on determining driver signatures from OBD data using a Raspberry Pi. In the first article I described in detail how to construct your Raspberry Pi; now let us write some code to read data from your car and put it to the test. In this second article I will describe the software needed to read data from your car's CAN bus, combine it with data captured from the GPS antenna attached to your Raspberry Pi into one packet, and send it over to the cloud. I will show you the software setup for capturing data on the client (the Raspberry Pi), storing it locally, compressing it periodically, encrypting it and sending it to a cloud server. I will also show you the server setup you need in the cloud to receive the data coming in from the Raspberry Pi, decrypt it and store it in a database or push it to a messaging queue for streaming purposes. All the work will be done in Python, and my database of choice for this project is MongoDB.

Before you read this article, I would encourage you to read the first article of this series so that you know what hardware setup you need to reproduce this yourself.

Capturing OBD data locally

To begin with, let us see how we can capture data on the Raspberry Pi and save it locally. Since this is the first task that needs to be accomplished, let us figure out a way to capture data continuously and save it somewhere. Our data-transmittal task is actually achieved using two processes:

  1. Capture data constantly and keep saving it to a local database.
  2. Periodically (once a minute in our case) summarize the data collected since the last successful run, and send it over to the cloud database.

Since we are going to walk through a lot of code, I am only going to illustrate the salient features of the solution. A lot of the simpler programming nuances are left for you to figure out by looking at the code.

Did I say looking at the code? Where is it? Well, the entire code-base for this project is on Github at https://github.com/anupambagchi/driver-signature-raspberry-pi. You can clone this repository on your machine and go through the details. Note that I was successful in running this code only on a Raspberry Pi running Ubuntu Mate; I had some trouble installing the required gps module on a Mac, but it runs fine on the Raspberry Pi, where it is supposed to run. Most of the modules required by the Python program can be obtained using the 'pip' command, e.g. 'pip install crypto'. To get the gps module you need to run 'sudo apt-get install python-gps'.

Where to store the data on a Raspberry Pi?

Remember that the Raspberry Pi is a small device with little memory and possibly little disk space. You need to choose a database that is nimble but effective for this scenario. We do not need any multi-threading ability, nor do we need to store months' worth of data. The database is mostly going to be used to collect transitional data that will shortly be compacted and sent over to the cloud database.

The universal database for this purpose is the built-in SQLite database that comes with every Linux installation. It is a file-based database, which means one has to specify a file when instantiating it. Make a clone of the repository in the '/opt' directory on your Raspberry Pi.

You will find a file called /opt/driver-signature-raspberry-pi/create_table_statements.sql and two other files with the extension '.db', which are the database files used by the job.

To initialize the database, you will need to run an initialization script. This is a one-time process on your Raspberry Pi. The SQL statements to set up the database tables are as follows:

    CREATE TABLE CAR_READINGS(
        ID INTEGER PRIMARY KEY NOT NULL,
        EVENTTIME TEXT NOT NULL,
        DEVICEDATA BLOB NOT NULL
    );

    CREATE TABLE LAST_PROCESSED(
        TABLE_NAME TEXT NOT NULL,
        LAST_PROCESSED_ID INTEGER NOT NULL
    );

    CREATE TABLE PROCESSED_READINGS(
        ID INTEGER PRIMARY KEY NOT NULL,
        EVENTTIME TEXT NOT NULL,
        TRANSMITTED BOOLEAN DEFAULT FALSE,
        DEVICEDATA BLOB NOT NULL,
        ENCKEY BLOB NOT NULL,
        DATASIZE INTEGER NOT NULL
    );

To run it, you need to invoke the following:

    $ sqlite3 obd2data.db < create_table_statements.sql

This will create the necessary tables in the database file 'obd2data.db'.
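If you want a quick sanity check that the tables were created, a small snippet like the following will list them (this check is my own addition and assumes the database file sits at the path used by the recorder):

    import sqlite3

    # List the tables in the freshly initialized database
    conn = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2data.db')
    print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
    conn.close()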

Capturing OBD data

Now let us focus on capturing the OBD data. For this we make use of a popular Python library called pyobd, which may be found at https://github.com/peterh/pyobd. There have been many forks of this library over the past eight years or so. However, my repository adds a lot to it - mainly for cloud processing and machine learning - so I decided not to call it a fork, since the original purpose of the library has been altered substantially. I also modified the code to work with Python 3.

The main program to read data from the OBD port and save it to a SQLite3 database may be found in 'obd_sqlite_recorder.py'. You can refer to this file under 'src' folder while you read the following.

To invoke this program you have to pass two parameters - the name of the user and a string representing the vehicle. For the latter I generally use the convention '<make>-<model>-<year>', for example 'gmc-denali-2015'. A sample invocation is shown below. Let us now go through the salient features of the OBD scanner.
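For example, assuming the repository is cloned under /opt and you run the script from its 'src' folder (the user name below is a placeholder, and whether you call python or python3 depends on your setup):

    $ cd /opt/driver-signature-raspberry-pi/src
    $ sudo python obd_sqlite_recorder.py john gmc-denali-2015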

After doing some basic sanity tests, such as whether the program is running as superuser and whether the appropriate number of parameters has been passed, the next step is to scan the ports for a GSM modem and initialize it.

    allRFCommDevicePorts = scanRadioComm()
    allUSBDevicePorts = scanUSBSerial()
    print("RFPorts detected with devices on them: " + str(allRFCommDevicePorts))
    print("USBPorts detected with devices on them: " + str(allUSBDevicePorts))

    usbPortsIdentified = {}

    iccid = ''  # Default values are blank for those that come from GSM modem
    imei = ''

    for usbPort in allUSBDevicePorts:
        try:
            with time_limit(4):
                print ("Trying to connect as GSM to " + str(usbPort))
                gsm = GsmModem(port=usbPort, logger=GsmModem.debug_logger).boot()
                print ("GSM modem detected at " + str(usbPort))
                allUSBDevicePorts.remove(usbPort)  # We just found it engaged, don't use it again
                iccid = gsm.query("AT^ICCID?", "^ICCID:").strip('"')
                imei = gsm.query("ATI", "IMEI:")
                usbPortsIdentified[str(usbPort)] = "gsm"
                print(usbPort, usbPortsIdentified[usbPort])
                break  # We got a port, so break out of loop
        except TimeoutException:
            # Maybe this is not the right port for the GSM modem, so skip to the next number
            print ("Timed out!")
        except IOError:
            print ("IOError - so " + usbPort + " is also not a GSM device")

Once this is done, we clean up anything that is 15 days old or older so that the database does not keep growing. Data that old should have been transmitted to the cloud long ago, so purging it keeps the database healthy.

    # Open a SQLite3 connection
    dbconnection = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2data.db')
    dbcursor = dbconnection.cursor()

    # Do some cleanup as soon as you start. This is to prevent the database size from growing too big.
    localtime = datetime.now()
    delta = timedelta(days=15)
    fifteendaysago = localtime - delta
    fifteendaysago_str = fifteendaysago.isoformat()
    dbcursor.execute('DELETE FROM CAR_READINGS WHERE EVENTTIME < ?', (fifteendaysago_str,))
    dbconnection.commit()
    dbcursor.execute('VACUUM')  # VACUUM compacts the whole database file (SQLite does not vacuum individual tables)
    dbconnection.commit()

Notice that we open the database connection and execute a SQL statement to purge the data that is older than 15 days.

Next it is time to connect to the OBD port: check whether the connection can be established, and exit the program if it cannot. Before you run this program, you need to pair with the ELM 327 device using the Bluetooth settings on your desktop; the device should be alive and available for connection as soon as you turn the ignition switch on. This pairing may be done manually through the Linux desktop UI or by a program that establishes the connection automatically as soon as the machine boots.

    gps_poller.start()  # start it up
    logitems_full = ["dtc_status", "dtc_ff", "fuel_status", "load", "temp", "short_term_fuel_trim_1",
                     "long_term_fuel_trim_1", "short_term_fuel_trim_2", "long_term_fuel_trim_2",
                     "fuel_pressure", "manifold_pressure", "rpm", "speed", "timing_advance", "intake_air_temp",
                     "maf", "throttle_pos", "secondary_air_status", "o211", "o212", "obd_standard",
                     "o2_sensor_position_b", "aux_input", "engine_time", "abs_load", "rel_throttle_pos",
                     "ambient_air_temp", "abs_throttle_pos_b", "acc_pedal_pos_d", "acc_pedal_pos_e",
                     "comm_throttle_ac", "rel_acc_pedal_pos", "eng_fuel_rate", "drv_demand_eng_torq",
                     "act_eng_torq", "eng_ref_torq"]

    # Initialize the OBD recorder
    obd_recorder = OBD_Recorder(logitems_full)
    need_to_exit = False
    try:
        obd_recorder.connect(allRFCommDevicePorts + allUSBDevicePorts)
    except:
        exc_type, exc_value, exc_traceback = sys.exc_info()
        traceback.print_tb(exc_traceback, limit=1, file=sys.stdout)
        print ("Unable to connect to OBD port. Exiting...")
        need_to_exit = True

    if not obd_recorder.is_connected():
        print ("OBD device is not connected. Exiting.")
        need_to_exit = True

    if need_to_exit:
        os._exit(-1)

Notice that we first start the GPS poller, then attempt to connect to the OBD recorder, exiting the program if unsuccessful.

Now that all connections have been checked, it is time to do the actual job of recording the readings. (In the full source this recording loop is wrapped in a try block, which is why the exception handlers appear after the loop in the excerpt below.)

  1. # Everything looks good - so start recording
  2. print ("Database logging started...")
  3. print ("Ids of records inserted will be printed on screen.")
  4.  
  5. lastminute = -1
  6. need_to_exit = False
  7. while True:
  8.     # It may take a second or two to get good data
  9.     # print gpsd.fix.latitude,', ',gpsd.fix.longitude,' Time: ',gpsd.utc
  10.     if need_to_exit:
  11.         os._exit(-1)
  12.  
  13.     if obd_recorder.port is None:
  14.         print("Your OBD port has not been set correctly, found None.")
  15.         sys.exit(-1)
  16.  
  17.     localtime = datetime.now()
  18.     results = obd_recorder.get_obd_data()
  19.  
  20.     currentminute = localtime.minute
  21.     if currentminute != lastminute:
  22.         dtc_codes = obd_recorder.get_dtc_codes()
  23.         print ('DTC=', str(dtc_codes))
  24.         results["dtc_code"] = dtc_codes
  25.         lastminute = currentminute
  26.  
  27.     results["username"] = username
  28.     results["vehicle"] = vehicle
  29.     results["eventtime"] = datetime.utcnow().isoformat()
  30.     results["iccid"] = iccid
  31.     results["imei"] = imei
  32.  
  33.     loc = {}
  34.     loc["type"] = "Point"
  35.     loc["coordinates"] = [gpsd.fix.longitude, gpsd.fix.latitude]
  36.     results["location"] = loc
  37.     results["heading"] = gpsd.fix.track
  38.     results["altitude"] = gpsd.fix.altitude
  39.     results["climb"] = gpsd.fix.climb
  40.     results["gps_speed"] = gpsd.fix.speed
  41.     results["heading"] = gpsd.fix.track
  42.  
  43.     results_str = json.dumps(results)
  44.     # print(results_str)
  45.  
  46.     # Insert a row of data
  47.     dbcursor.execute('INSERT INTO CAR_READINGS(EVENTTIME, DEVICEDATA) VALUES (?,?)',
  48.                      (results["eventtime"], results_str))
  49.  
  50.     # Save (commit) the changes
  51.     dbconnection.commit()
  52.  
  53.     post_id = dbcursor.lastrowid
  54.     print(post_id)
  55.  
  56. except (KeyboardInterrupt, SystemExit, SyntaxError):  # when you press ctrl+c
  57.     print ("Manual intervention. Killing thread... " + str(sys.exc_info()[0]))
  58.     need_to_exit = True
  59. except serial.serialutil.SerialException:
  60.     print("Serial connection error detected - OBD device may not be communicating. Exiting. " +
  61.           str(sys.exc_info()[0]))
  62.     need_to_exit = True
  63. except IOError:
  64.     print("Input/Output error detected. Exiting. " + str(sys.exc_info()[0]))
  65.     need_to_exit = True
  66. except:
  67.     print("Unexpected exception encountered. Exiting. " + str(sys.exc_info()[0]))
  68.     need_to_exit = True
  69. finally:
  70.     exc_type, exc_value, exc_traceback = sys.exc_info()
  71.     traceback.print_tb(exc_traceback, limit=1, file=sys.stdout)
  72.     print(sys.exc_info()[1])
  73.     gps_poller.running = False
  74.     gps_poller.join()  # wait for the thread to finish what it's doing
  75.     dbconnection.close()
  76.     print ("Done.\nExiting.")
  77.     sys.exit(0)
  78.  

A few lines of this code need explanation. The readings are stored in the variable 'results'. This is a dictionary that is first populated through a call to obd_recorder.get_obd_data() [Line 18], which loops through all the required variables that we need to measure and reads their values. This dictionary is then augmented with the DTC codes, if any are found [Line 22]. DTC stands for Diagnostic Trouble Code; these are codes defined by the manufacturer to represent error conditions inside the vehicle or engine. In lines 27-31, the results dictionary is augmented with the username, vehicle and mobile SIM card parameters. Finally, in lines 34-41 we add the GPS readings.

So you see that each reading contains information from various sources - the CAN bus, SIM card, user-provided data and GPS signals.

When all the data has been gathered in the record, we save it in the database (Lines 47-48) and commit the changes.

Uploading the data to the cloud

Note that none of the data saved so far has left the machine - it is all stored locally. Now it is time to build a mechanism to send it over to the cloud. This data must be

  1. summarized
  2. compressed
  3. encrypted

before we can upload it to our server. On the server side, that same record needs to be decrypted, decompressed and then stored in more persistent storage where one can do some big-data analysis. At the same time it needs to be streamed to a messaging queue to make it available for stream processing - mainly for alerting purposes.

Stability and Thread-safety

The driver for uploading data to the cloud is a cron job that runs every minute. We could also write a program with an internal timer that runs like a daemon, but after a lot of experimentation - especially with large data-sets - I have found that running an internal timer leads to instability over the long run. A program that runs forever may build up garbage in the heap over time and ultimately freeze. A program invoked through a cron job wakes up, does its job for that moment and exits. That way it always stays out of the way of the data-collection program and keeps the machine healthy. A cron entry along the lines of the one below does the trick.
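For illustration only - the interpreter and script path here are my assumptions; the actual entry used is in crontab.txt under the scripts directory of the repository:

    # Run the data compactor/transmitter once every minute
    * * * * * python /opt/driver-signature-raspberry-pi/src/obd_transmitter.py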

Along the same lines, I also need to mention something about thread-safety pertaining to SQLite3. The next task is to summarize the data collected by the recorder. Can I technically use the same database that lives in the single file obd2data.db? Not so fast. Because the recorder runs in an infinite loop and constantly writes to this database, attempting to write another table in the same file runs into thread-safety issues and the table gets corrupted. I tried this initially and realized it was not a stable architecture when I saw the process freeze and found data corruption. So I altered the design to write the summary to a different database file, treating the raw-data database as read-only from the summarizer's point of view.
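As a side note, if you want to enforce that read-only access in code rather than by convention, Python's sqlite3 module can open a database in read-only mode through a URI. This is just a minimal sketch of the idea, not something the repository itself does:

    import sqlite3

    # Open the raw-data database strictly read-only; any accidental write raises an error
    raw_db = sqlite3.connect('file:/opt/driver-signature-raspberry-pi/database/obd2data.db?mode=ro', uri=True)

    # Writes go to a separate summary database file, avoiding contention with the recorder
    summary_db = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2summarydata.db')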

Data Compactor and Transmitter

To accomplish the task of transmitting the summarized data to the cloud, let us write a class that fulfils this task. You will find it in the file obd_transmitter.py.

The main loop that does the task is as follows:

    DataCompactor.collect()

    # We do not want the server to be pounded with requests all at the same time.
    # So we have a random wait time to distribute it over the next 30 seconds.
    # This brings the max wait time per minute to be 40 seconds, which is still 20 seconds to do the job (summarize + transmit).
    waitminutes = randint(0, 30)
    if with_wait_time:
        time.sleep(waitminutes)
    DataCompactor.transmit()
    DataCompactor.cleanup()

There are three tasks - collect, transmit and cleanup. Let us take a look at each of these individually.

Collect and summarize

[Figure: Driver Prediction Using Raspberry Pi 2 Architecture]

The following code creates a packet of data for each minute, encrypts it, compresses it and queues it for transmission. There are finer details in each of these steps that I am going to explain, but let's look at the code first.

  1. # First find out the id of the record that was included in the last compaction task
  2. dbconnection = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2data.db')
  3. dbcursor = dbconnection.cursor()
  4. last_id_found = dbcursor.execute('SELECT LAST_PROCESSED_ID FROM LAST_PROCESSED WHERE TABLE_NAME = "CAR_READINGS" LIMIT 1')
  5.  
  6. lastId = 0
  7. try:
  8.     first_row = next(last_id_found)
  9.     for row in chain((first_row,), last_id_found):
  10.         pass  # exhaust the cursor; 'row' ends up holding the last row returned
  11.     lastId = row[0]
  12. except StopIteration as e:
  13.     pass  # 0 results
  14.  
  15. # Collect data till the last minute last second, but not including the current minute
  16. nowTime = datetime.utcnow().isoformat()  # Example: 2017-05-14T19:51:29.071710 in ISO 8601 extended format
  17. # nowTime = '2017-05-14T19:54:58.398073' # for testing
  18. timeTillLastMinuteStr = nowTime[:17] + "00.000000"
  19. # timeTillLastMinute = dateutil.parser.parse(timeTillLastMinuteStr) # ISO 8601 extended format
  20.  
  21. dbcursor.execute('SELECT * FROM CAR_READINGS WHERE ID > ? AND EVENTTIME <= ?', (lastId, timeTillLastMinuteStr))
  22.  
  23. allRecords = []
  24. finalId = lastId
  25. for row in dbcursor:
  26.     record = row[2]
  27.     allRecords.append(json.loads(record))
  28.     finalId = row[0]
  29.  
  30. if lastId == 0:
  31.     # print("Inserting")
  32.     dbcursor.execute('INSERT INTO LAST_PROCESSED (TABLE_NAME, LAST_PROCESSED_ID) VALUES (?,?)', ("CAR_READINGS", finalId))
  33. else:
  34.     # print("Updating")
  35.     dbcursor.execute('UPDATE LAST_PROCESSED SET LAST_PROCESSED_ID = ? WHERE TABLE_NAME = "CAR_READINGS"', (finalId,))
  36.  
  37. # print allRecords
  38. dbconnection.commit()  # Save (commit) the changes
  39. dbconnection.close()  # And close it before exiting
  40. print("Collecting all records till %s comprising IDs from %d to %d ..." % (timeTillLastMinuteStr, lastId, finalId))
  41.  
  42. encryptionKeyHandle = open('encryption.key', 'r')
  43. encryptionKey = RSA.importKey(encryptionKeyHandle.read())
  44. encryptionKeyHandle.close()
  45.  
  46. # From here we need to break down the data into chunks of each minute and store one record for each minute
  47. minutePackets = {}
  48. for record in allRecords:
  49.     eventTimeByMinute = record["eventtime"][:17] + "00.000000"
  50.     if eventTimeByMinute in minutePackets:
  51.         minutePackets[eventTimeByMinute].append(record)
  52.     else:
  53.         minutePackets[eventTimeByMinute] = [record]
  54.  
  55. # print (minutePackets)
  56. summarizationItems = ['load', 'rpm', 'timing_advance', 'speed', 'altitude', 'gear', 'intake_air_temp',
  57.                       'gps_speed', 'short_term_fuel_trim_2', 'o212', 'short_term_fuel_trim_1', 'maf',
  58.                       'throttle_pos', 'climb', 'temp', 'long_term_fuel_trim_1', 'heading', 'long_term_fuel_trim_2']
  59.  
  60. dbconnection = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2summarydata.db')
  61. dbcursor = dbconnection.cursor()
  62. for minuteStamp in minutePackets:
  63.     minutePack = minutePackets[minuteStamp]
  64.     packet = {}
  65.     packet["timestamp"] = minuteStamp
  66.     packet["data"] = minutePack
  67.     packet["summary"] = DataCompactor.summarize(minutePack, summarizationItems)
  68.  
  69.     packetStr = json.dumps(packet)
  70.  
  71.     # Create an AES encryptor
  72.     aesCipherForEncryption = AESCipher()
  73.     symmetricKey = Random.get_random_bytes(32)  # generate a random key
  74.     aesCipherForEncryption.setKey(symmetricKey)  # and set it within the encryptor
  75.     encryptedPacketStr = aesCipherForEncryption.encrypt(packetStr)
  76.  
  77.     # Compress the packet
  78.     compressedPacket = base64.b64encode(zlib.compress(encryptedPacketStr))  # Can be transmitted
  79.     dataSize = len(packetStr)
  80.  
  81.     # Now do asymmetric encryption of the key using PKS1_OAEP
  82.     pks1OAEPForEncryption = PKS1_OAEPCipher()
  83.     pks1OAEPForEncryption.readEncryptionKey('encryption.key')
  84.     symmetricKeyEncrypted = base64.b64encode(pks1OAEPForEncryption.encrypt(symmetricKey))  # Can be transmitted
  85.  
  86.     dbcursor.execute('INSERT INTO PROCESSED_READINGS(EVENTTIME, DEVICEDATA, ENCKEY, DATASIZE) VALUES (?,?,?,?)',
  87.                      (minuteStamp, compressedPacket, symmetricKeyEncrypted, dataSize))
  88.  
  89. # Save this list to another table
  90. dbconnection.commit()  # Save (commit) the changes
  91. dbconnection.close()  # And close it before exiting

To do some book-keeping (lines 2 to 13), I keep the last-processed ID in a separate table. Every time I successfully process a bunch of records, I save the last-processed ID in this table so that the next run can pick up from there. Remember, this program is triggered from a cron job that runs every minute. You will find the cron description in the file crontab.txt under the scripts directory.

Then we collect all the new records (lines 15 to 40) from the CAR_READINGS table into an array allRecords, where each item is a rich document extracted from the JSON payload. One important point to note is that we do not include the current minute, since it may be incomplete. Lines 42 to 44 load the RSA public key (encryption.key), and lines 46 to 58 break the collected records into one bucket per whole minute and list the fields that need to be summarized. In line 60 we open a connection to a new database (stored in a different file - obd2summarydata.db) to hold the summary data.

Lines 62 to 87 do the task of actually creating the summarized packet. Each packet has three fields - the timestamp (whole minute, no seconds), the packet of all data collected during that minute, and the summary data (i.e. aggregates over the minute). First this packet is created using a summarize function that I will describe later. Then the packet is encrypted using AES with a randomly generated key (Line 73). Since the data packet size is non-uniform, we encrypt the packet with this randomly generated symmetric key and send the key itself to the server in encrypted form so that the server can decrypt the packet. The encrypted packet is compressed (Line 78) to prepare it for transmission. The last step is to encrypt the symmetric key itself so that it can be sent to the server in the same payload; for this we use PKCS#1 OAEP encryption (the PKS1_OAEPCipher class) with the public key stored in encryption.key on the device, while the matching private key (decryption.key) stays on the server. The event time (whole minute), the compressed and encrypted packet, the encrypted key and the data size are saved as a record in the table PROCESSED_READINGS (Lines 86-87).
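To make the hybrid scheme concrete, here is a minimal, self-contained sketch of the same idea using PyCryptodome directly. The AESCipher and PKS1_OAEPCipher classes in the repository wrap logic along these lines; the use of EAX mode and the exact byte layout below are my own assumptions, not necessarily what the repository does:

    import base64, json, zlib
    from Crypto.Cipher import AES, PKCS1_OAEP
    from Crypto.PublicKey import RSA
    from Crypto.Random import get_random_bytes

    packet = {"timestamp": "2017-05-14T19:51:00.000000", "data": [], "summary": {}}
    packet_bytes = json.dumps(packet).encode('utf-8')

    # 1. Encrypt the packet with a fresh 256-bit symmetric key (AES, EAX mode for simplicity)
    symmetric_key = get_random_bytes(32)
    aes = AES.new(symmetric_key, AES.MODE_EAX)
    ciphertext, tag = aes.encrypt_and_digest(packet_bytes)
    encrypted_packet = aes.nonce + tag + ciphertext

    # 2. Compress and base64-encode the encrypted packet so it can travel as JSON text
    compressed_packet = base64.b64encode(zlib.compress(encrypted_packet))

    # 3. Encrypt the symmetric key itself with the RSA public key (PKCS#1 OAEP)
    public_key = RSA.import_key(open('encryption.key', 'rb').read())
    wrapped_key = base64.b64encode(PKCS1_OAEP.new(public_key).encrypt(symmetric_key))

    # 'compressed_packet' and 'wrapped_key' are what get stored and later POSTed to the server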

Note that when the packet is created you have a choice of sending only the summarized data or the entire set of raw records along with the summary. Obviously, if you want to save bandwidth you would do most of the "edge-processing" work in the Raspberry Pi itself and only send the summary record each time. However, in this experiment I wanted to do some additional work on the cloud that was more granular than the once-a-minute scenario. As shown in part 3 of this series of articles, I actually do the summarization once every 15 seconds for driver signature analysis, so I needed to send all the raw data as well as the summary in my packet - thereby increasing the bandwidth requirements. However, the compression of data helped a lot in reducing the size of the original packet, by almost 90%.

Data Aggregation

Let me now describe how the summarization is done. This is the "edge-computing" part of the entire process that is difficult to do with generic devices. Any IoT device (CalAmp, for example) will be able to do most of the work pertaining to capturing OBD data and transmitting it to the cloud. But those devices are perhaps not capable enough to do the summarization - which is why one needs a more powerful computing machine like a Raspberry Pi for the job. All I do for summarization is the following:

  1. summary = {}
  2. for item in items:
  3.     summaryItem = {}
  4.     itemarray = []
  5.     for reading in readings:
  6.         if isinstance(reading[item], (float, int)):
  7.             itemarray.append(reading[item])
  8.     # print(itemarray)
  9.     summaryItem["count"] = len(itemarray)
  10.     if len(itemarray) > 0:
  11.         summaryItem["mean"] = numpy.mean(itemarray)
  12.         summaryItem["median"] = numpy.median(itemarray)
  13.         summaryItem["mode"] = stats.mode(itemarray)[0][0]
  14.         summaryItem["stdev"] = numpy.std(itemarray)
  15.         summaryItem["variance"] = numpy.var(itemarray)
  16.         summaryItem["max"] = numpy.max(itemarray)
  17.         summaryItem["min"] = numpy.min(itemarray)
  18.  
  19.     summary[item] = summaryItem
  20.  
  21. return summary

Look at line 56 of the collection code block shown earlier. You will see an array listing all the items that we need to summarize, held in the variable summarizationItems. For each item in this list, we compute the mean, median, mode, standard deviation, variance, maximum and minimum over each minute (Lines 11 to 17). The summarized items are appended to each record before it is saved to the summary database; an example of what one summary entry looks like is shown below.
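For illustration only - the numbers here are made up, but the keys mirror what the code above produces for each summarized field:

    # Hypothetical per-minute summary entry for the 'speed' field
    summary["speed"] = {
        "count": 65,       # number of numeric readings seen in the minute
        "mean": 54.2,
        "median": 55.0,
        "mode": 55.0,
        "stdev": 3.1,
        "variance": 9.6,
        "max": 60.0,
        "min": 47.0
    }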

Transmitting the data to the cloud

To transmit the data over to the cloud you need to first set up an end-point. I am going to show you later how you can do that on the server. For now, let us assume that you already have that available. Then from the client side you can do the following to transmit the data:

  1. base_url = "http://OBD-EDGE-DATA-CATCHER-43340034802.us-west-2.elb.amazonaws.com"  # for accessing it from outside the firewall
  2.  
  3. url = base_url + "/obd2/api/v1/17350/upload"
  4.  
  5. dbconnection = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2summarydata.db')
  6. dbcursor = dbconnection.cursor()
  7. dbupdatecursor = dbconnection.cursor()
  8.  
  9. dbcursor.execute('SELECT ID, EVENTTIME, TRANSMITTED, DEVICEDATA, ENCKEY, DATASIZE FROM PROCESSED_READINGS WHERE TRANSMITTED="FALSE" ORDER BY EVENTTIME')
  10. for row in dbcursor:
  11.     rowid = row[0]
  12.     eventtime = row[1]
  13.     devicedata = row[3]
  14.     enckey = row[4]
  15.     datasize = row[5]
  16.  
  17.     payload = {'size': str(datasize), 'key': enckey, 'data': devicedata, 'eventtime': eventtime}
  18.     response = requests.post(url, json=payload)
  19.  
  20.     # print(response.text)  # TEXT/HTML
  21.     # print(response.status_code, response.reason)  # HTTP
  22.  
  23.     if response.status_code == 201:
  24.         dbupdatecursor.execute('UPDATE PROCESSED_READINGS SET TRANSMITTED="TRUE" WHERE ID = ?', (rowid,))
  25.         dbconnection.commit()  # Save (commit) the changes
  26.  
  27. dbconnection.commit()  # Save (commit) the changes
  28. dbconnection.close()  # And close it before exiting

The end-point (which I will show you later) accepts POST requests. You also need to configure a load-balancer that allows connections from the outside world to reach the server inside the firewall. You must establish adequate security measures to ensure that this tunnel exposes only the required port on the internal server.

Lines 1 to 7 set up the connection to the summary database. In the table I store a flag TRANSMITTED that indicates whether a record has been transmitted or not. For every record that has not yet been transmitted (Line 9), I create a payload comprising the size of the packet, the encrypted key to use for decrypting the packet, the compressed/encrypted data packet and the event time (Line 17). This payload is then POSTed to the end-point (Line 18). If the transmission is successful, the TRANSMITTED flag is set to true for that packet so that we do not attempt to send it again.

Cleanup

The cleanup operation is pretty simple. All I do is delete all records from the summary table that are more than 15 days old.

    localtime = datetime.now()
    if int(localtime.isoformat()[14:16]) == 0:
        delta = timedelta(days=15)
        fifteendaysago = localtime - delta
        fifteendaysago_str = fifteendaysago.isoformat()
        dbconnection = sqlite3.connect('/opt/driver-signature-raspberry-pi/database/obd2summarydata.db')
        dbcursor = dbconnection.cursor()
        dbcursor.execute('DELETE FROM PROCESSED_READINGS WHERE EVENTTIME < ?', (fifteendaysago_str,))
        dbconnection.commit()
        dbcursor.execute('VACUUM')  # compact the whole database file after deleting old rows

        dbconnection.commit()  # Save (commit) the changes
        dbconnection.close()  # And close it before exiting

 

Server on the Cloud

As the final piece of this article, let me describe how to set up the end-point on the server. There are many items needed to put it together, yet surprisingly all of this is achieved in a relatively small amount of code - thanks to the crispness of the Python language.

  1. print(request.content_type)
  2. if not request.json or not 'size' in request.json:
  3.     raise InvalidUsage('Invalid usage of this web-service detected', status_code=400)
  4.  
  5. size = int(request.json['size'])
  6. decoded_compressed_record = request.json.get('data', "")
  7. symmetricKeyEncrypted = request.json.get('key', "")
  8.  
  9. compressed_record = base64.b64decode(decoded_compressed_record)
  10. encrypted_json_record_str = zlib.decompress(compressed_record)
  11.  
  12. pks1OAEPForDecryption = PKS1_OAEPCipher()
  13. pks1OAEPForDecryption.readDecryptionKey('decryption.key')
  14. symmetricKeyDecrypted = pks1OAEPForDecryption.decrypt(base64.b64decode(symmetricKeyEncrypted))
  15.  
  16. aesCipherForDecryption = AESCipher()
  17. aesCipherForDecryption.setKey(symmetricKeyDecrypted)
  18.  
  19. json_record_str = aesCipherForDecryption.decrypt(encrypted_json_record_str)
  20.  
  21. record_as_dict = json.loads(json_record_str)
  22.  
  23. # Add the account ID to the reading here
  24. record_as_dict["account"] = account
  25.  
  26. # print record_as_dict
  27. post_id = mongo_collection.insert_one(record_as_dict).inserted_id
  28. print('Saved as Id: %s' % post_id)
  29.  
  30. producer = KafkaProducer(bootstrap_servers=['your.kafka.server.com:9092'],
  31.                          value_serializer=lambda m: json.dumps(m).encode('ascii'),
  32.                          retries=5)
  33. # send the individual records to the Kafka queue for stream processing
  34. raw_readings = record_as_dict["data"]
  35. counter = 0
  36. for raw_reading in raw_readings:
  37.     raw_reading["id"] = str(post_id) + str(counter)
  38.     raw_reading["account"] = account
  39.     producer.send("car_readings", raw_reading)
  40.     counter += 1
  41.  
  42. producer.flush()
  43. # send the summary to the Kafka queue in case there is some stream processing required for that as well
  44. raw_summary = record_as_dict["summary"]
  45. raw_summary["id"] = str(post_id)
  46. raw_summary["account"] = account
  47. raw_summary["eventTime"] = record_as_dict["timestamp"]
  48. producer.send("car_summaries", raw_summary)
  49.  
  50. producer.flush()
  51. return jsonify({'title': str(size) + ' bytes received'}), 201

I decided to use MongoDB as persistent storage for records and Kafka as the messaging server for streaming. The following tasks are done in order in this function:

  1. Check for invalid usage of this web-service, and raise an exception if the request is malformed (Lines 1 to 3). A simple test for the existence of 'size' in the payload is used for this.
  2. Decompress the packet (Line 9 to 10)
  3. Decrypt the transmission key using the private key (decryption.key) stored on the server. (Line 12 to 14)
  4. Decrypt the data packet (Line 19)
  5. Convert the JSON record to an internal Python dictionary for digging deeper into it (Line 21)
  6. Save the record in MongoDB (Line 27)
  7. Push the same records into Kafka messaging queues (Lines 30 to 50)

This functionality is exposed as a web-service using a Flask server; you will find the rest of the server code in the file flaskserver.py in the 'server' folder. A rough skeleton of how such a handler is typically wired up is sketched below.
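This is only an illustrative sketch - the route path mirrors the URL the client POSTs to, but the function name, port and wiring here are my assumptions; the real implementation lives in flaskserver.py:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route('/obd2/api/v1/<account>/upload', methods=['POST'])
    def upload_packet(account):
        # The body of this function is the code shown above:
        # validate the request, decompress and decrypt the packet,
        # insert it into MongoDB, and push readings and summary to Kafka.
        ...
        return jsonify({'title': 'received'}), 201

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)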

I have covered the salient features to put this together, skipping the other obvious pieces which you can peruse yourself by cloning the entire repository.

Conclusion

I know this has been a long post, but I needed to cover a lot of things. And we have not even started working on the data-science part. You may have heard that a data scientist spends 90% of the time in preparing data. Well, this task is even bigger - we had to set up the hardware and software to generate raw real-time data and store it in real-time to even start thinking about data science. If you are curious to see a sample of the collected data, you can find it here.

But now that this work is done, and we have taken special care that the generated data is in a nicely formatted form, the rest of the task should be easier. You will find the data science related stuff in the third and final episode of this series.

Go to Part 3 of this series

Published in Data Science