Another Method to Eliminate Suspend Data: SCORM 2004

As a follow-up to the last post, the customer noticed that the user was not getting a bookmark back to the scoring page during the next session. I had used SCORM Watch for testing and saw the bookmark (cmi.location) being sent by ToolBook. But then I tested in Tracker.Net and saw that the customer was right. This is because the SCORM 2004 standard says that once a lesson (SCO) has been marked as complete, the user basically has to start over if they return to the lesson. That means no bookmark, status, or suspend data is sent back the next session. So a simpler solution for preventing the extra suspend data is to stick with SCORM 2004 (this doesn’t work in SCORM 1.2) and mark the book as completed if the user passes, otherwise discarding results. This is shown below.

This is somewhat less efficient than the approach in the last post in that the huge suspend data is still sent to the LMS and stored. But it is not sent BACK to the content and, most importantly, ToolBook doesn’t need to spend the time and processing power to reset all the questions to their original state.
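The approach above can be sketched in plain SCORM 2004 runtime calls. This is a minimal, hypothetical sketch, not ToolBook’s actual runtime code: `api` stands in for the LMS-provided API_1484_11 object, and `exitLesson` is an illustrative name for the logic, not a real ToolBook function.

```javascript
// Stub for the LMS-provided SCORM 2004 API object (API_1484_11).
const api = {
  data: {},
  SetValue(name, value) { this.data[name] = String(value); return "true"; },
  Terminate() { return "true"; }
};

function exitLesson(passed) {
  if (passed) {
    // Once the SCO is completed, SCORM 2004 gives the learner a fresh
    // attempt next launch: no bookmark, status, or suspend data comes back.
    api.SetValue("cmi.completion_status", "completed");
    api.SetValue("cmi.success_status", "passed");
    api.SetValue("cmi.exit", "normal"); // end the attempt rather than suspend it
  }
  // On a failing score, ToolBook's "exit and discard results" simply skips
  // sending the status, bookmark, and suspend data before terminating.
  api.Terminate("");
}
```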


Eliminating Unwanted Suspend Data in ToolBook

I was working with a Tracker.Net customer recently. As part of testing, I noticed that their ToolBook lessons were sending lots of suspend data and interaction data. I helped them rework their logic and wanted to document that in case it would be helpful to other ToolBook developers. There were several issues.

  1. The book was sending a complete test score and interaction data as soon as a user relaunched the lesson. That was a combination of ToolBook’s automatic SCORM bookmarking and the “load page” action. The load page action looked like this:
    on load page
        trigger Score Quiz "Score Quiz"

    This made sense in that the developer wanted to score the entire book (35 questions) when the user reached the score page. However, since the user then exited from this page, the first thing that happened on relaunch was that the whole book was scored again. You would think that the user would have a zero score the second time, but that brings us to item 2:

  2. ToolBook’s suspend data includes all the question responses. This can be quite extensive and can exceed the 4,096-character limit that SCORM 1.2 places on cmi.suspend_data. This developer was using SCORM 2004, so that wasn’t a limit. But it was still quite a bit of data to send back. More importantly, the user had to wait while all the questions were put back to their original state. Putting this together with #1, the user ended up with the same score (and 35 questions’ worth of interaction data) being immediately sent back to the LMS when they reopened the lesson.
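If you are testing content like this, a quick sanity check on suspend data size is easy to script. This is a hedged sketch with assumed limit values: SCORM 1.2 caps cmi.suspend_data at 4,096 characters, while SCORM 2004 (3rd Edition and later) raises the cap to 64,000 characters.

```javascript
// Character limits on cmi.suspend_data per SCORM version (assumed from the
// specs; earlier SCORM 2004 editions used a smaller limit).
const SCORM12_SUSPEND_LIMIT = 4096;
const SCORM2004_SUSPEND_LIMIT = 64000;

// Returns true if the suspend data string would fit under the given limit.
function suspendDataFits(suspendData, limit) {
  return suspendData.length <= limit;
}
```

A suspend data blob like the one shown below would pass under SCORM 2004 but could easily blow past the SCORM 1.2 limit once every question response is recorded.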

Here is what the suspend data looked like:

global _AXF_YEAR=0 SEQUENCE=["p0","p92","p121","p122","p123","p124","p125","p126","p128","p127","p129","p131","p133","p134","p135","p136","p137","p138","p139","p140","p141","p142","p144","p145","p146","p147","p148","p149","p150","p151","p152","p143","p153","p154","p155","p120"] RAWSCORE=15 qn P92.O3=1~0~a~1~0~.022~b~0~0~.012~c~0~0~.017~d~0~1 P121.O3=1~yes~a~1~0~no~b~0~1 P122.O3=1~1~a~1~1~2~b~0~0 P123.O3=1~1~a~1~0~2~b~0~1 P124.O3=1~1~a~1~1~2~b~0~0 P125.O3=1~.012~d~0~0~.006~b~0~0~.010~c~1~0~0~a~0~1 P126.O3=1~Least Material Condition~c~0~0~Maximum Material Condition~a~0~0~Regardless of Feature Size~b~1~1 P128.O3=1~A~a~0~0~B~b~1~1~C~c~0~0 P127.O3=1~A~a~0~0~B~b~0~0~C~c~1~1~D~d~0~0 P129.O9=1~A~a~0~1~B~b~0~0~C~c~1~0~D~d~0~0 P131.O9=1~A~a~0~0~B~b~0~1~C~c~1~0~D~d~0~0 P133.O3=1~A~a~0~1~B~b~1~0~C~c~0~0 P134.O3=1~.742~b~0~0~.766~a~0~0~.758~c~1~0~.734~d~0~1 P135.O3=1~.367~c~0~0~.395~a~0~0~.355~d~1~1~.383~b~0~0 P136.O3=1~LMC Pin~c~0~0~Ø.375 Pin~b~1~0~Expanding Pin~a~0~1 P137.O3=1~A~a~0~1~B~b~0~0~C~c~1~0~D~d~0~0 P138.O3=1~.320~d~0~0~.312~c~0~0~.308~a~1~1~.316~b~0~0 P139.O3=1~.308~a~0~1~.312~c~0~0~.316~b~1~0~.320~d~0~0 P140.O3=1~.574~c~0~0~.590~d~0~0~.548~a~1~1~.556~b~0~0 P141.O3=1~.004~d~0~0~.008~b~0~1~.001~a~1~0~.000~c~0~0 P142.O3=1~Yes~a~0~0~No~b~1~1 P144.O3=1~Yes~a~1~0~No~b~0~1 P145.O3=1~Yes~a~1~0~No~b~0~1 P146.O3=1~Yes~a~1~1~No~b~0~0 P147.O3=1~Two parallel planes~a~0~1~Two parallel lines~b~1~0 P148.O3=1~Derived Median Line~b~0~1~Surface~a~1~0 P149.O3=1~Yes~a~0~0~No~b~1~1 P150.O3=1~Yes~a~0~0~No~b~1~1 P151.O3=1~Yes~a~0~1~No~b~1~0 P152.O3=1~Yes~a~0~0~No~b~1~1 P143.O3=1~1.940~a~0~1~2.000~b~0~0~2.060~d~1~0~1.880~c~0~0 P153.O3=1~.028~c~0~0~.020~b~0~0~.004~d~1~1~.016~a~0~0 P154.O3=1~Entire surfaces~b~0~0~Individual slices~a~1~1 P155.O3=1~1.500~a~0~0~1.505~d~0~0~1.520~c~1~0~1.510~b~0~1

Quite a big chunk. Here is what the interaction data looked like for just one question:

paramName = cmi.interactions.33.id. paramValue = Multiple_Choice__P155_3_
paramName = cmi.interactions.33.timestamp. paramValue = 2012-02-04T08:24:17
paramName = cmi.interactions.33.type. paramValue = choice
paramName = cmi.interactions.33.latency. paramValue = PT3.70S
paramName = cmi.interactions.33.correct_responses.0.pattern. paramValue = 1.510
paramName = cmi.interactions.33.result. paramValue = 0
paramName = cmi.interactions.33.weighting. paramValue = 1
paramName = cmi.interactions.33.learner_response. paramValue = 1.520
paramName = cmi.interactions.33.description. paramValue = What is the maximum outside diameter of the sleeve?
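For reference, the log entries above correspond to a series of SetValue calls against the cmi.interactions branch of the data model. This is an illustrative sketch, not ToolBook’s actual code: `api` is a stub for the LMS-provided API_1484_11 object, and `recordChoiceInteraction` and its parameter names are made up for the example. (ToolBook’s log shows a numeric result; the sketch uses the SCORM 2004 correct/incorrect vocabulary instead.)

```javascript
// Stub for the LMS-provided SCORM 2004 API object.
const api = {
  data: {},
  SetValue(name, value) { this.data[name] = String(value); return "true"; }
};

// Records one multiple-choice interaction at the given index.
function recordChoiceInteraction(index, q) {
  const p = "cmi.interactions." + index + ".";
  api.SetValue(p + "id", q.id);
  api.SetValue(p + "type", "choice");
  api.SetValue(p + "timestamp", q.timestamp);            // e.g., 2012-02-04T08:24:17
  api.SetValue(p + "latency", q.latency);                // ISO 8601 duration, e.g., PT3.70S
  api.SetValue(p + "correct_responses.0.pattern", q.correct);
  api.SetValue(p + "learner_response", q.response);
  api.SetValue(p + "result", q.response === q.correct ? "correct" : "incorrect");
  api.SetValue(p + "weighting", "1");
  api.SetValue(p + "description", q.description);
}
```

Multiply those nine calls by 35 questions and you can see why relaunching the lesson generated so much unwanted traffic.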

So we have lots of unwanted data going to/from the LMS and duplicate quiz scores and question data showing up in reports and so forth. So what do we do?

  1. Only score the test if the user started at the beginning and didn’t just arrive on the scoring page via a bookmark. We do this with a global variable as shown below.

    Notice that we initialize the value of the hasVisitedPage1 variable to false. We then set the value to true on the load page of page 1 of our book.

    When the user gets to the scoring page, we check this same hasVisitedPage1 variable to determine whether to score the book. If the user has not visited page 1, we ask whether they want to start the quiz. If they do, we navigate them to page 1; if not, we exit the book and discard results (which eliminates suspend data). Here is the screen capture:
  2. The next step is to prevent all that suspend data from going to the LMS in the first place. The lesson has a “record score and exit” button. My first approach was to reset the book before marking the book as complete. This eliminates the suspend data but makes the score 0. Here is a screen capture, but I don’t recommend this approach since having a zero score doesn’t make sense.
  3. There is a bit of a “Catch 22” here, as we don’t want suspend data but need the completion status and the score. I decided to go ahead and make my own SCORM calls. So we send the data that we need manually and THEN discard results. This bumps us into another challenge in ToolBook: coming up with the percentage score. While you can programmatically score the book with the Actions Editor, the resulting score is a raw number. There is no programmatic way to get the maximum score so that you can calculate the percentage. We could hard-code the maximum score in a global variable, but this is asking for maintenance problems down the road. Instead, we use the fact that the score button WILL display the percentage score in a “score” field. Since we score the book when the page loads, we know that this field will be populated by the time the user clicks the “Record Score” button (scoring and reading the field in the same script can run into timing problems). Here’s what the dialog box for the “Score Quiz” button looks like:
    The score field then looks like this:  Score: 80%. We can change this format via “Generic Runtime System Prompts,” but we’ll just deal with parsing the score we want out of the text. Here is the updated action for the “Record Score” button.

    We first read the text of our Score field to get our hands on the “Score: 80%” value. We then parse that to set the scoreVal variable to 80 (or whatever score the user has, of course). We divide that by 100 to get the scaledScoreVal variable. Note that this example is for SCORM 2004, which uses a scaled passing score. If we were doing SCORM 1.2, we would work directly with scoreVal and read “cmi.student_data.mastery_score” to get the raw passing score. Next, we call LMSGetValue with the parameter “cmi.scaled_passing_score” and store the return value in passingScaledPassingScore. We then compare this value to our scaledScoreVal variable. If the user scored high enough, we set the completionStatus and successStatus variables to completed and passed, respectively. These default to incomplete and failed. If you were using SCORM 1.2, you would only need a single completion status. Next, we call LMSSetValue with these parameters and values:

    1. “cmi.completion_status” and completionStatus
    2. “cmi.success_status” and successStatus
    3. “cmi.score.min” and 0
    4. “cmi.score.max” and 100
    5. “cmi.score.raw” and scoreVal
    6. “cmi.score.scaled” and scaledScoreVal

    Note that ToolBook by default sends a cmi.score.max and cmi.score.raw that correspond to the actual scores (e.g., 35 for the max and 28 for the raw), but it is actually preferable in all cases I know of to use the normalized values out of 100. Note also that for SCORM 1.2 you would need to use the SCORM 1.2 versions such as cmi.core.score.max, but otherwise the logic would be the same. Finally, we exit and discard results.
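The “Record Score” logic described above can be sketched outside of ToolBook as follows. This is a hedged illustration under assumptions: `api` is a stub standing in for the SCORM 2004 API object that ToolBook’s LMSGetValue/LMSSetValue actions talk to, `recordScore` is an invented name, and the passing score of 0.75 is just sample data.

```javascript
// Stub SCORM 2004 API with a sample scaled passing score from the LMS.
const api = {
  data: { "cmi.scaled_passing_score": "0.75" },
  GetValue(name) { return this.data[name] || ""; },
  SetValue(name, value) { this.data[name] = String(value); return "true"; }
};

// Parses the "Score: 80%" field text and reports the normalized score.
function recordScore(scoreFieldText) {
  // "Score: 80%" -> 80 -> 0.8
  const scoreVal = parseInt(scoreFieldText.replace(/[^0-9]/g, ""), 10);
  const scaledScoreVal = scoreVal / 100;

  const passingScore = parseFloat(api.GetValue("cmi.scaled_passing_score"));
  const passed = scaledScoreVal >= passingScore;

  api.SetValue("cmi.completion_status", passed ? "completed" : "incomplete");
  api.SetValue("cmi.success_status", passed ? "passed" : "failed");
  api.SetValue("cmi.score.min", "0");
  api.SetValue("cmi.score.max", "100");       // normalized, not the raw max of 35
  api.SetValue("cmi.score.raw", String(scoreVal));
  api.SetValue("cmi.score.scaled", String(scaledScoreVal));
  return passed;
}
```

After these calls, the content exits and discards results, so the big suspend data blob never has to be sent.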

This turned into quite a bit of work, but these changes reduced traffic to/from the LMS tremendously AND provided better functionality. I hope this is helpful to other ToolBook developers and Tracker.Net customers.

This exercise has led us to add an “Ignore Suspend Data” feature to the Tracker.Net version 6 wish list. It would also be quite useful for ToolBook to add a property to remove question data from suspend data or to skip suspend data completely. Doing the same with bookmarks would be helpful as well.