Tuesday, 27 October 2015

My Recent Projects

Rather than blog about my recent work, I thought I'd try Google Sites. I'm not exactly sure why, but a Site does have the ability to break things down into pages, rather than creating ONE HUGE blog post, or an unconnected stream of related posts.

There's material about using Google Apps to create web applications or improve processes, LiveCode multimedia authoring and a little visualisation work.

Anyway, here it is:


Tuesday, 13 October 2015

Reflections on Wuthering Bytes

A week or so ago I went to the Wuthering Bytes conference in Hebden Bridge. The event had a maker/hacker/thinker angle I was keen to snoop into, to find out what people were up to in this strange world that seems to have blossomed whilst I wasn't looking.

The presentations began with Prof. Danielle George, whom I'd heard being interviewed on Radio 4 on the way in to work about her work with radio engineering... from looking deep into space, to controlling jet engines, to monitoring field moisture levels for effective agriculture.

Next up, Stephen Jagger gave us an amusing history of an audio engineering company, making microphones and mixer desks for the BBC. My favourite part was about the white lie they told to get their biggest gig.

Leila Johnston, in unbelievable shoes, shared her work on Hack Circus, a publication that tackles a "geek" subject but which, strangely, always mutates into an art event and focal point for "community outsiders" before it is published.

Towards the end, Leila drifted into startup self-help mode, which I loved, including nuggets such as "PUT YOURSELF OUT OF CONTEXT" and "USE WHAT YOU HAVE GOT NOT WHAT YOU WISH YOU HAD". I found these messages strangely profound. The mantras kept coming: "BE SUSPICIOUS OF ALL DELAYS".

Jeremy Ruston, the inventor of TiddlyWiki, was a hoot. Like Will Self in voice and laconic languor ( get me, trying to use big words Will Self stylee ), he shared a sort of hippy take on the last few decades of computing that I liked: "NOT EVERYBODY IS LIKE US" and "TECHNOLOGY IS CUNNING MADE CONCRETE".

I liked his schtick. The title of his presentation was "Hackability as a Human Right", and he explained how part of the power of "the hack" is that the cost of failure, in both emotional and fiscal terms, is minor. You may lose some time and effort, but that's all, and when "the hack" works it can be beautiful and flourish. And to "not hack" is an abdication of "what matters".

Good schtick.

Christine Farion, "the bag lady", demoed her anxiety-reducing handbag, which shows when the right RFID-tagged items are in it, and shared some of the history of attempts to use technology as an aid to our everyday lives. Her research has taken her down the road of ever simpler designs, to the point where they sort of just work the way you'd expect them to. She finished with the line "EVERYTHING IS POSSIBLE - I JUST DON'T KNOW HOW RIGHT NOW".

David Hayward shared his journey through and beyond the world of making video games. There seemed to be a route emerging whereby people aim to be part of big (and literally very bad) industries, get battered whilst making a life and then find more artistic and event-oriented careers.

David shared video games that just didn't make sense, revealing a subculture of people who don't care about shoot 'em ups or driving games, and whose shaping cultural influences weren't Star Wars and Lord of the Rings.

He gets asked "What's your business model?" a lot.

Jennifer Crawford shared battle stories of working with Victorian printing machines and cracking out startup ideas and approaches. So, using the old hot-metal typesetting slugs, they now emboss Moleskine notebooks, or they utilise Etsy to make prints etc.

But again, they're actually taking an old two-tonne typesetting machine out on the road as an event/experience. Catch them at the next Ideal Home Show, apparently.

I reckon they'll get asked about their business model too.

David Mills is a guy who can take a rolled-up and stuck-together parchment scroll and, using a CT scanner, take "slices" through it, measuring the thickness of the ink. The clever bit is using software to take these spirals and flip them around, and thus be able to ACTUALLY READ THE WRITING ON THE SCROLL. Incredible. You may have seen this on the BBC a while ago.

He also X rays his lunch every day and posts it on Twitter.

From I CAN MAKE, Chris Thorpe shared their educational 3D printing projects but, most interestingly, defined the difference between hackers or makers or geeks or whatever you want to call them, and NORMALS. I loved their print-a-working-Tower-Bridge project, which they run as a lesson IN Tower Bridge. I loved how they created lesson plans, not just 3D models, and had a philosophy of "DISRUPT TEDIUM".

The day ended with Eva Pascoe, who I didn't know I knew ( of ). She was behind Cyberia, the internet cafes way back in prehistoric times. 

I liked it when she spoke of the importance of Trust: she got everyone to get their phones out, mentioned privacy and Facebook, then told everyone to pass their phone to the person beside them and let them have a rummage around. Ha!

Eva had battle scars from the Cookie wars and sang songs of how the commercial world defeated the engineers, and she lamented how everyone seems to "be resigned to it". She continues to fight for who gets access to data, and is contributing to a World Magna Carta / Bill of Rights for the internet world.

Thursday, 17 September 2015

Apps Used in York's Archaeology Data Service

Following a short presentation about online apps we're looking at at York, Michael Charno got in touch and said...

The following are apps that we use at the Archaeology Data Service:

* Asana [https://asana.com/]: It's a really simple task management app that enables task allocation, commenting, prioritising, creating deadlines, etc. It's free for use amongst 10 colleagues, so we've been fine with it so far.
* New Relic [http://newrelic.com/]: Systems analytics software for understanding where problems exist in servers/web applications/interfaces. Obviously more useful for people managing servers or web applications, so might not be widely useful. However, if the university was going to get a license we'd happily join in!
* Slack [https://slack.com/]: We used the free version but quit after we found ourselves hitting the 10,000 message limit quickly and didn't want to purchase a license. We haven't replaced it, but would certainly start using it again if the university was going to get it.

It's not the first time someone at York has mentioned Asana to me. I went to the tool, logged in with my York account and it tells me that 277 York members are already there (including Dan and Paul from the Web Office). After a quick look, I do like the simplicity of Asana.

Slack is like a "Twitter for your team" application. Anyone else tried it or like it?

Wednesday, 16 September 2015

My Reflections on ALTC 2015

Photo credit: Chris Bull

Last week I spent three days at the Association for Learning Technology Conference (#altc) in Manchester. 

It’s been a good few years since I’ve done the conference thing, but I was looking forward to totally immersing myself in ideas and learning from people’s experiences.

The conference themes were:

  • Harnessing the power of the crowd – collaboration and connectivist learning
  • Learners as agents of change
  • Open educational practice
  • Participatory approaches to the development of learning technologies
  • Social media in learning and teaching

The Emergent Themes - What It Really Was About

For me, the standout sessions and the ideas that seemed to permeate most of the conference were:

  • Learning Analytics
  • Research and reports regarding User Centred Methodologies and Pedagogies
  • Novel approaches and research into learning, e.g. technologies, apps, MOOCs, IoT, wearables etc.

… but this of course could have just been the sessions I chose to go to. There was an overwhelming choice of sessions at any one time, which inevitably gives you the feeling that you must be missing more than you’re getting. Take a look at the sessions for day one here https://altc.alt.ac.uk/2015/programme-interactive/#/day1.

I took copious notes throughout the conference, or rather doodled a lot, knowing that when you get flooded with information it’s so easy to lose many of the small but interesting threads. I did. Even now, a week later things are still sinking in.

Learning Analytics

I’m guilty of assuming I knew what Learning Analytics was simply because I recognise the words. The conference has given me a clearer understanding of the aims and intentions people have but it’s clear that the paint is not dry on lots of Learning Analytics thinking.

All the Learning Analytics sessions were about early findings and plans for the future. Yes, there are some “complete” products available, like IBM’s Portal. Interestingly for me, IBM seem to bundle a social tool ( think Facebook for the university ), which could very obviously be used to feed more usage data into the learning analytics side of things. Seems smart-ish.

The JISC Learning Analytics initiative and tools looked very interesting. We saw wireframe mockups and finished designs of what the student app and the teacher dashboards would look like. They looked slick. But it was funny how, when you start “playing” with the app in your mind, digging deeper, all sorts of dark corners and questions start to emerge about how it would work, what it would mean and what would then follow. The ethics and oversight of Learning Analytics are as important as how it all works. http://analytics.jiscinvolve.org/wp/

I was particularly intrigued by JISC’s “Consent Service” component, where a student might agree to their data being used to provide them with more information, or even BYOD ( Bring Your Own Data, for example uploading your FitBit logs or allowing access to other online resources ).

The OU suggested that aligning your learning analytics with your educational vision was the only way forward. This made complete sense and they showed how different universities had approached their learning analytics implementations.

We saw a presentation from Brocklehurst College who were working with IBM’s Portal service. They’re a year in and claim myriad improvements in retention and performance.

From Hull, @thebigparticle noted that their Learning Analytics work had stimulated lots of discussion with students that was superficially about the tool but was actually about their learning, and that this was invaluable, changing students’ mindsets.

Some Random Technology, Apps and Gadgets

QuizIt Champion is an interesting quiz-testing app that will be released soon and is free. http://blogs.plymouth.ac.uk/telmed/

Aurasma AR tool looks fun and powerful - and really could be used with Archaeology exhibits, for example. Think of everything in the world as a visual QR code, kinda.

iBeacons are little devices that broadcast “I am here, I am here” messages, meaning that, with an app, you can ascertain presence or closeness to the device. With three in a room you can effectively do indoor GPS. These look fun to tinker with.
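As a back-of-the-envelope illustration of that “indoor GPS” idea ( this is generic geometry, not any particular beacon SDK, and the room dimensions and distances are invented ): given three beacons at known positions and a distance estimate to each, you can trilaterate a position.

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Estimate (x, y) from three known beacon positions and the
    measured distances to each. Subtracting the three circle
    equations pairwise leaves two linear equations in x and y."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero if the beacons are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three beacons in an imaginary 10m x 10m room, device in the middle
d = 5 * math.sqrt(2)  # distance from (5, 5) to each corner beacon
print(trilaterate((0, 0), (10, 0), (0, 10), d, d, d))  # (5.0, 5.0)
```

In practice the distance estimates ( derived from signal strength ) are noisy, which is why real deployments smooth readings over time.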

Makey Makey is what you need https://www.youtube.com/watch?v=rfQqh7iCcOU when you want a banana piano. No, seriously.

http://goanimate.com/ is being used for storyboarding.

If you haven’t seen the horror site, Take This Lollipop, and you use Facebook, give it a whirl.  http://www.takethislollipop.com/

Student Centered Processes: @GCDigiTech

Gloucester College had a nice project where they asked “Student Innovators” to trial and review apps to tell other students if they’re useful or not in their studies. The students blogged their thoughts here. http://gcstudentinnovators1415.blogspot.co.uk/

They created an “App of The Week” category and used the blog to provide some peer support. Recommended apps included Grammarly and RefMe.

The interesting part of this was how students were researching the tools and the ways they wanted to work, making suggestions and recommendations to their lecturers and each other, and discussing the best approaches together.

To Wrap Up

I headed off to ALTC expecting to learn a lot, and I learned more than I expected. I learned that full-on three-day conferences are knackering, I’m getting old, and concentrating and listening hard for that amount of time is taxing work. I will, of course, be suggesting a meditation zone with sitar players, hot tubs and champagne to the organisers for next year. Back me up, people.

Wednesday, 1 April 2015

Getting the Edit Link From A Google Form Response

This code snippet may be of use to someone; I couldn't find an easier way of doing this. You need to be familiar with Apps Script and Forms to follow it, sorry.

When you have a Google Form saving responses into a Google Spreadsheet, at times it would be nice to be able to give people the link to edit their form ( later ).

So.. in your onFormSubmit(e) function you need...

var timestamp = e.range.getValues()[0][0]
var row_num = e.range.getRow()

If you get the value from e.values or e.namedValues it doesn't work. No idea why... it may be something to do with millisecond precision or the space/time continuum... dunno.
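My best guess ( illustrated here in Python rather than Apps Script, with a made-up timestamp ) is that the stringified values lose sub-second precision, so an equality check against the response's real timestamp never matches:

```python
from datetime import datetime

# A response timestamp as stored internally, with millisecond precision
actual = datetime(2015, 4, 1, 9, 30, 15, 123000)  # 123 ms past the second

# The same timestamp after a round trip through a seconds-only string,
# as a value read back from the sheet might be
text = actual.strftime("%Y-%m-%d %H:%M:%S")
reparsed = datetime.strptime(text, "%Y-%m-%d %H:%M:%S")

print(actual == reparsed)  # False: the milliseconds were dropped
```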

Then you need a function like this...

/**
* Gets the edit link for the response matching a timestamp
* @method get_edit_link_for
* @param {string} form_id The Form ID
* @param {Date} timestamp A date
* @return {string} A URL.
*/
function get_edit_link_for(form_id, timestamp){
  var form = FormApp.openById(form_id)
  var formResponses = form.getResponses()
  // Let's work backwards through the list, should get there quicker...
  for (var i = formResponses.length - 1; i >= 0; i--) {
    var formResponse = formResponses[i]
    var response_timestamp = formResponse.getTimestamp()
    if ( Number(timestamp) == Number(response_timestamp)){
      return formResponse.getEditResponseUrl()
    }
  }
  //If it gets to here, it ain't found it. Boo!
  Logger.log( "Error: get_edit_link_for")
}

...You then just... 

var edit_url = get_edit_link_for("YOUR_FORM_ID", timestamp)
sheet.getRange(row_num, YOUR_COLUMN_NUMBER).setValue( edit_url )

...and save it in your onFormSubmit function and it's there ready to send out in an email if need be.

Hope this helps.

Friday, 27 March 2015

The Rebirth of Authoring? Making iPad apps with LiveCode

I was recently asked if I'd heard of an authoring tool for creating iPad apps called LiveCode.

I don't know if you know, but LiveCode is like the only living grandchild of a tool called HyperCard. And you might also not know that, like the post-punk band Killing Joke, HyperCard is one of those subjects that makes me want to sit you down and tell you stories from days of yore. For EVER.

You see, HyperCard was the tool that I first did any programming with, back in 1991. The Apple Macintosh was billed as "The Computer For The Rest Of Us"... and HyperCard, bundled free with every Macintosh was "Programming For The Rest Of Us".

But before I can regale you with tales of HyperCard's features, you need to understand the context from which HyperCard sprang. You have to do some homework.

Your Homework

You have to go back to 1945, the time when Vannevar Bush outlined how a conceptual hypertext machine called the Memex would work, and how trails would be created through linked documents. Fascinating!

You have to go to 1960 and learn about Ted Nelson and what the word "Intertwingularity" means.

Then it's back to 1968 and Doug Engelbart's Mother of All Demos, in which he shows off the inventions of a Graphical User Interface ( GUI ), a mouse, networking and collaboration. Watch the video ( it's a bit slow but edifying ). It astonishes me how many of the ideas he was working on we still haven't grasped.

It's now the 1980s, when Randy Smith demoed the Alternate Reality Kit, and you start to see the importance of the direct manipulation of objects on screen.

It was the work going on at Xerox PARC ( Doug and Randy's amongst others ) that Steve Jobs saw; he got inspired and went off to make the Macintosh.

And then here, in 1987, is HyperCard's maker, Bill Atkinson, demoing HyperCard. What I find fascinating about this video is how awkward the terminology they use for what's on screen is. It is so new, they don't really know how to even talk about it.

With HyperCard you created stacks of cards, and on those cards you put buttons and text fields and graphics... and in those objects you put simple code that did wonderful interactive things.

People used HyperCard to make all sorts of stacks. From tools, to stories, to adventure games to learning materials to anything.

The Demise of HyperCard

And then it was no more, for all sorts of reasons. Steve Jobs closed down HyperCard ( I imagine mainly because people made some really ugly stacks ), and the web had just happened. The whole concept of authoring seemed to fade away. The hypertext of the web was very different from HyperCard's stacks.

The notion of authoring limped on a while. There was a commercial ( and colour ) version of HyperCard called SuperCard. Oracle created a tool called Oracle Media Objects. MacroMind made Director and, later as Macromedia, Flash. But none of these grabbed the collective imagination like HyperCard did.

A few years ago I heard of a HyperCard-like tool called Revolution from a company called RunRev. I tried it, liked it, but couldn't quite see the relevance of writing stacks anymore. And I think Revolution evolved into LiveCode ( correct me if I'm wrong ).


But now, with LiveCode, everything has changed. There's a free Community version, and your LiveCode stacks can be saved as iOS or Android apps, with HTML5 export under development. Your stacks can talk to your servers. Your stacks can embed browsers. Authoring is relevant again.

I ran LiveCode up and, within minutes, had created a simple interactive map of the Museum Gardens in York that presumably I can compile and run on an iPad. It felt like the "old days": creating cards, adding buttons and adding code like this...

on mouseup
      go to card "Prairie"
end mouseup

...how easy is that? 

The Project

The project we are hoping to use LiveCode for, involves Archaeology students exploring Paris, and creating interpretations of what they find ( textual, audio and images ) and then designing an app to share that work. The department has a number of iPads which will enable them to use their stacks in the location. It's likely that the stacks will be authored on a variety of devices ( Mac, Windows, Android ).

Ideally, I would have liked a tool that saved an app as HTML5 ( this feature is in production at LiveCode ). That way, there would be no need to work with Apple's App Store ( and its Ad Hoc licensing ) and the students' work would also work on the web and on an Android tablet ( should they have one ). 

The Alternatives

Whilst looking for HTML5 tools, it's worth mentioning a few that may be of use later.

Maqetta is a browser-based open-source project for creating HTML5 content. It looked promising but the interface beat me; I was unable to use states to change the scene.

Animatron uses a timeline metaphor and, again, looked promising, but it is probably aimed more at animation than simple stack creation. A great tool, but not quite the right tool in this case.

Silex has a beautifully simple approach to creating HTML5 content ( to which you can add HTML that does the bits that it maybe doesn't do, for example, play audio or video ). I actually wanted a little more control than this tool gave me, but I did manage to make an interactive HTML5 piece with it in a very short time.

Apple's iBooks Author is worth a mention, in that you can create rich media documents with images and videos that can then be viewed on an iPad ( via the iBooks store ). It doesn't really suit creating interactive media though, being based more on creating a page-by-page stream of information. I must say I was put off by the "you can't publish anywhere else" licensing restriction Apple attach to iBooks Author content, especially when compared to LiveCode's obvious "community" goodwill feel-good factor.

I also considered Hype, the excellent HTML5 animation creator; again, its metaphor ( creating animations on a timeline ) could be pressed into creating hypermedia, but it would be a bit of a stretch.

Both of these would have required Macs to author, which wouldn't have been ideal.


It's great fun to be playing with a distant relative of HyperCard again, it is so easy to create things and I look forward to working out how to compile and run stacks on iOS ( which means we'll have to get an Apple Developer account at $99 ... sigh... still ).

I Want To Improve My Spreadsheet

I often get people coming to visit me who have a spreadsheet they want to get more from. They either want to automate certain tasks, or create new sheets with aggregated data or share data with colleagues in new ways. The hope is that with a little bit of code, new vistas will open up.

Often, although the data is in a spreadsheet, it isn't clean enough to do anything useful with. If code is to stand a chance of making a spreadsheet more useful, the data itself needs to be "code ready".

Below is an actual spreadsheet brought to me, with a number of areas that needed data cleaning.

As we worked together, we realised that a healthy spreadsheet isn't just about making sure your data is logical; other factors also contribute to how easy your data will be to work with.

  • Use formulas well - A few easy to learn formulas can significantly ramp up what you can do quickly with data. It is worth investing even just a few minutes learning new formulas and what they can do for you.
  • Prevent errors - Make sure you validate data where you can, and help people not to make your data grubby.
  • Improve the interface - With Google Spreadsheets you can add menus, actions and buttons and even sidebars that can turn a spreadsheet chore into a breeze.
  • Use the charts and visualisations - Getting more out of your data can be as easy as creating a well designed dashboard using Google Spreadsheet's inbuilt charts.

Here is my list of suggestions for how to make this spreadsheet's data "code ready" in a Google Doc.

There are heaps of short videos on Google Gooru's YouTube page. In minutes you can be learning new features and taming those scary spreadsheets.

Stunning Student Work in the 3Sixty at York

Sara Perry taught the Archaeology module in which students design an exhibition for the 3Sixty space at York. This year the students really went beyond all expectations and produced some stunning and innovative work that made full use of the space's capabilities.

One piece, about Clifford's Tower, made use of numerous live action actors to deliver snippets of spoken word from the time. The presentation had moments where spotlights illuminated the actors in the room ( see below ).

When this piece put you in a 3D model of Clifford's Tower, slowly flying around it, it was actually breathtaking.

What is shown here is only a facsimile of the real experience ( of course the actors aren't acting for us in it ) but it does give you some idea of how well the piece was choreographed and how professionally the students wove their ideas into a compelling experience.

I was involved in helping to take their work and make it viewable using a Javascript 3D library called Three.js.

You will need Chrome/Firefox to view the piece: here

Remember, this is without the real actors. The experience is about 6 minutes long and was designed to integrate with offline museum activities and exhibition spaces.

Preparing Media For The 3Sixty Space at York

In a recent student project to create archaeological exhibitions in the 3Sixty space we needed to look at how to easily chop a very wide movie into four separate smaller movies.

There are lots of templates to help you present in the 3Sixty space, including Powerpoint files, but we also needed a way to view the presentations NOT in the space itself: some form of on-screen 3D version of a 2D presentation that is normally shown in real 3D. Are you keeping up? We needed a version of the presentation that could be viewed on screen rather than in the room.

I found a Python library called MoviePy that lets you edit videos using code. It's brilliant! You can do video-in-video effects, split-panel videos, animations, freeze frames and all sorts.

So, with the code below, we were able to take a VERY WIDE movie generated by the Powerpoint template being exported as a movie... and make four separate movie files, one for each wall.

from moviepy.editor import *
from moviepy.video.fx.all import *

movie_file = "/Library/WebServer/Documents/Three.js/ExportedFromPowerpoint.mp4"
w = 1440 #3840 for the full-size movie; width of full movie
h = 244 #600 for the full-size movie; height of full movie
s = w / 4 #width of an individual wall
print "Chopping..."

# Crop each quarter of the wide movie out into its own file.
# Only wall 1 keeps its audio (see the note about echo below).
for i in range(4):
    clip = VideoFileClip(movie_file)
    wall = crop(clip, x1=s*i, y1=0, x2=s*(i+1), y2=h)
    if i > 0:
        wall = wall.without_audio()
    filename = "wall%d.mp4" % (i + 1)
    wall.write_videofile(filename, codec='libx264')
    print "Chopped:", filename, wall.duration, "seconds long."

print "Chopped: All done!"

It's worth noting that we only needed audio on one of the movies; otherwise four copies of the same audio played, causing a weird echo effect. Also, unless the codec was libx264, the movies didn't load into the Three.js space.

After this we were then able to use the movies in a 3D simulation of the room.

See how this was used here.

Tuesday, 20 January 2015

The Solution: Rendering video onto the inside walls of a 3D room

So after a lot of experimentation, I decided that WebGL was a good way to go ( see an earlier post  about automatically showing videos on a 3D models walls).

I took the video example and simply hacked around, watching where objects move to when I changed values, and then added extra objects, in this case walls.

And it worked! Which is pretty impressive ( I think ) for someone who knows nothing about 3D programming. Here is a live version showing music I loved from the 70s.

Monday, 12 January 2015

Tools For Prototyping A Narrative

Another of the things I'm mulling over is how to research and prototype a narrative of some sort, for a student project. In the past we've used Pinterest as a research-gathering tool, to collect sources of inspiration and, kind of importantly, the visual clichés to avoid. We're not sure if Pinterest is the best tool to use.

Some tools you should simply take a look at for the sake of it are:

Amazon StoryBuilder to create a script or screenplay. This is like a corkboard of notes with which you develop your "story".

And then there's Amazon StoryTeller that lets you create a visual storyboard from your script. Interestingly, the tool seems to recognise "people" and places. It has a huge library of people, and scenes and props with which you can create your storyboard pages, like the one shown below where Dr Cutie gets incredibly jealous of Greg's acrobatic cows. Ahem.

The tools themselves are interesting enough, but the really interesting part about them is how, this being Amazon of course, once you have created your storyboard you can publish it, and people can vote on how much they like it and suggest that a real video advert should be made of it.

Once your video advert is made, people can watch it and vote on how much they'd like to watch the real show. And then, if you're lucky, the show gets commissioned and it's shown on Amazon Prime Videos.  Amazon have thought about the process, from noting down sketchy ideas to actually making a movie or TV show. Wow...

Google Gallery. Look at this gallery of art installations or the way this gallery zooms in to images and text and has audio attachments.

The Problem: Rendering video onto the inside walls of a 3D room

A thinking out loud post...

The Scenario

At the university we have an amazing room called the 3Sixty. It's a room that can have media projected onto all four walls ( and there are some amazing speakers in there too ). Sara Perry runs a module in there for archaeology students to design a museum exhibition. Last year the students created World War I exhibitions using Powerpoint and YouTube videos. They were very moving. I almost cried at one about a loyal Alsatian.

The Problem

The problem is this... The students use a very wide ( four walls ) Powerpoint template to create their 3Sixty presentation, but once made, the only place you can really experience this presentation is in the room itself. It would be good if these .ppt files ( or exported movies ) could be projected onto a 3D version of the room. It's a very simple render, I think, but would allow people to see the presentations without being in the room.

Having no experience of 3D modelling, I dived in and had a go with a few tools.

Google Sketchup

This seems very easy to use. I went for the primary school version, Sketchup Make, hoping the simplicity would be useful.

I found I could easily add images to planes and that there's an extension called Video Texture Plugin  which seems to be able to render video onto a surface but it's Windows only.

Using Sketchup I exported the model as a VRML file and then was able to view the model using an app called FreeWRL.


I then had a play with Blender.  Regular plain ole 3D modelling may be the way to go.  I discovered you can add a video as a surface texture to a plane. The picture below doesn't look very impressive, but it IS a video on a wall ( try to ignore the box .. ahem). Yes, I'd need to learn how to use the software :-)


I tried an online 3D editor called Clara.io (shown below) which I failed to master in the five minutes I tried it :-) It does look incredible, although I'm not sure if I can stick a video onto a plane. This tool did have a large library of objects like chairs, cars and objectified 3D women in bikinis and thigh boots.


Traditional 3D modelling may be the route to follow, I don't know. I bumped into someone while tinkering on this who mentioned the Unreal Engine, but, like 3D modelling, it doesn't half seem a massive mallet for a very tiny nut. 

We almost need a Doom-like clone but with only one room...only simpler... (something like this maybe...)

I was hoping for something that would let us automate the conversion from Powerpoint ( or video ) into something viewable online... something like WebGL maybe. Like this... or this below... except instead of web pages, we might have a page with just a video in it. To do this we'd just need to "chop" each wall of our very wide Powerpoint movie into an individual movie.

This Three.js WebGL HTML5 tutorial might be a good place to start.

Blimey, if this is possible, it must be doable. Here a video becomes a model... Wow! Like this only a million times simpler!

Tuesday, 4 November 2014

Module Chooser Using Javascript AND Apps Script

I had hoped that there might be a new Google Forms Add-on to do this, but no. I guess I'll have to have a go at doing it myself later.

Imagine you want your students to make a choice of four modules from a list of modules you're offering. To make the form easier to "not get wrong" it'd be good if when you chose module one, it disappeared from the following form items...

... like this.

The above is HTML hosted on Google Drive (because Caja sanitisation, or Chrome, killed my Javascript).
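The client-side behaviour is simple enough to sketch in plain JavaScript: each drop-down offers the full module list minus whatever was picked in the earlier ones. This is my illustration of the idea, not the actual page's code:

```javascript
// All modules on offer.
var allModules = ["Databases", "Networks", "Graphics", "AI", "Security"];

// Return the options for the next drop-down:
// everything not already chosen in earlier ones.
function remainingModules(allModules, chosenSoFar) {
  return allModules.filter(function (m) {
    return chosenSoFar.indexOf(m) === -1;
  });
}

// After choosing "Graphics" then "AI", the third drop-down offers:
remainingModules(allModules, ["Graphics", "AI"]);
// → ["Databases", "Networks", "Security"]
```

On the real page you'd call something like this from each select's change handler and rebuild the later selects' option lists from the result.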

When a student chooses their preferred modules, the form is submitted to a regular doPost() method in a Google Spreadsheet's Apps Script like this...

function doPost(e) {
  try {
    // Get values from the submitted form
    var email = Session.getActiveUser().getEmail();
    var module_one = e.parameter.module_one;
    var module_two = e.parameter.module_two;
    var module_three = e.parameter.module_three;
    var module_four = e.parameter.module_four;
    add_students_selection(email, module_one, module_two, module_three, module_four);
    var template = HtmlService.createTemplateFromFile('thank_you.html');
    template.this_url = ScriptApp.getService().getUrl();
    return template.evaluate().setTitle("Thank You").setSandboxMode(HtmlService.SandboxMode.EMULATED);
  } catch (error) {
    // If anything above fails, show the error page instead
    var template = HtmlService.createTemplateFromFile('error.html');
    template.this_url = ScriptApp.getService().getUrl();
    template.error_title = error;
    template.error_detail = error.stack;
    return template.evaluate().setTitle("Error").setSandboxMode(HtmlService.SandboxMode.EMULATED);
  }
}

Wednesday, 29 October 2014

One-To-Many Relationship in a Google Spreadsheet

It's often the case that you really want and need a database to store your data, but Google Spreadsheets are just so handy, aren't they? Unfortunately, Google Spreadsheets aren't very good at relational data.

Here's an example where you want one column for the name of your recipe and another for the ingredients (comma separated).

To use the script, you click on the cell you want to be relational and choose Admin > Show Relationship Editor. This opens a dialog window showing all the options entered so far. You then edit the ingredients and it saves a comma-separated list back into the spreadsheet.
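The "relational" trick here is really just serialisation: the dialog edits a list, and the script writes it back into one cell as a comma-separated string. A sketch of the round trip in plain JavaScript (the function names are mine):

```javascript
// Turn the list edited in the dialog into a single cell value.
function ingredientsToCell(ingredients) {
  return ingredients.join(", ");
}

// Turn a cell value back into a list, trimming stray whitespace.
function cellToIngredients(cellValue) {
  if (cellValue === "") return [];
  return cellValue.split(",").map(function (s) { return s.trim(); });
}

var cell = ingredientsToCell(["flour", "butter", "sugar"]);
// cell is "flour, butter, sugar"
cellToIngredients(cell);
// → ["flour", "butter", "sugar"]
```

The obvious caveat is that an ingredient containing a comma would break the scheme, which is the usual price of faking a one-to-many relationship in a single cell.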

Here's the spreadsheet. Use File > Make a copy to see it work and rummage around in the code.

If anyone can help make the UI prettier I'd be grateful, thanks.


I love it when a plan comes together. Or when someone I've been working with really starts getting to grips with Google Apps.  Tom Grady shares what he's been doing with Apps Script.

I think he might have the bug.

The Problem With Google

I'm too old to be a fan of technology, but I quite like lots of it, and you can't argue with the fact that Google have taken the lead on collaboration. At the core of all its products is the idea that, whatever you're working on, you'll want to involve other people: as collaborators, commenters, mentors or viewers.

But Google's model of collaboration is all wrong. Or rather: we've adopted Google's tools at the university, and although they're the best tools for collaboration, their model of collaboration is hurting us.

Google's model of collaboration best matches a small business or an individual. This is reflected in how Google Drive works.

For example, in Google Drive, if you create a file, only you can delete it. That's great isn't it? Except because a file is yours, when you leave the university, unless your admins move ALL your files to someone else, they're gone. 

Before leaving the university, you could individually make someone else the owner of one of your files, like this...

But that is, to put it mildly, a bit of a faff... and if you put your files in a folder and make someone else the owner of the folder, the files still disappear when you leave (files don't inherit ownership from their folder).

And then, you might get fancy and think you could create a solution with Apps Script. So I tried that. My idea was to create a "dropbox", and a script to watch that dropbox and, when a file is added to it, make a copy (which I, or a departmental account, would then own). It worked fine. Except of course the script can't delete the original file, because I don't own it. So I was left with two copies of the file: one I (or a departmental account) owned, and the original. Sigh! (The code below doesn't solve the problem, by the way.)

function check_dropbox() {
  var dropbox_folder = DriveApp.getFolderById("FOLDER_ID");
  var main_folder = DriveApp.getFolderById("OTHER_FOLDER_ID");
  var files = dropbox_folder.getFiles();
  while (files.hasNext()) {
    var file = files.next();
    var name = file.getName();
    // Make a copy that this account owns...
    var new_file = file.makeCopy(name, main_folder);
    Logger.log("Made a copy of: " + name);
    // ...but the original can't be deleted here,
    // because this account doesn't own it.
  }
}

Maybe you could write a script to simply move your files to someone else. Except you'd have to get pretty fancy and page through your files if your script needed more than 9 seconds' running time. And whilst it might seem like a good idea, you can't transfer ownership of a document to someone at another organisation anyway.
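For the record, the "pretty fancy" paging boils down to: do a bounded batch of work, remember where you got to, and resume from that point on the next run (in Apps Script you'd save a continuation token and resume from a trigger). Here's the pattern sketched with a plain array standing in for Drive's file iterator; all the names are mine:

```javascript
// Process at most batchSize items per run, returning the index
// to resume from next time (or -1 when everything is done).
function processBatch(items, startIndex, batchSize, processItem) {
  var end = Math.min(startIndex + batchSize, items.length);
  for (var i = startIndex; i < end; i++) {
    processItem(items[i]);
  }
  return end < items.length ? end : -1;
}

// Two runs over five "files", three at a time:
var moved = [];
var next = processBatch(["a", "b", "c", "d", "e"], 0, 3,
                        function (f) { moved.push(f); });
// next is 3; a later run resumes from there:
next = processBatch(["a", "b", "c", "d", "e"], next, 3,
                    function (f) { moved.push(f); });
// next is now -1, and all five "files" have been processed
```

In the real thing, `startIndex` would be a Drive continuation token stashed in script properties between runs rather than a plain number.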

What The Problem Is

The problem is that Google files are so tied to an individual. As an organisation, you need to be able to have documents that aren't tied to an individual, but to a role or a department.

And it gets worse as soon as, say, three universities want to collaborate on a project together. And remember, collaboration is what Google are supposed to be excellent at. Imagine these three universities want to collaborate by sharing documents. Over the course of a project people might come and go, and ideally you don't want files disappearing when people move on.

More subtly, you don't actually want any one university to own the files ( even if this was possible, which it isn't ). What is required is a form of shared ownership.

So Come On Google

Collaboration is your thing, and I know these are easy problems for you to solve. You can't argue with the fact that, at times, we might not want our work tied to an individual; that we might want our work to have longevity beyond our involvement; and that we might want to work fluidly with other organisations.

At the moment I have someone asking, "We want to set up a five year project and share documents with three organisations. How do we do it with Google Drive?"... and unless your view of collaboration is one where documents are fleeting, ephemeral things rather than lasting records, there isn't really a Google-shaped solution that makes a lot of sense.