Novel Visualisation Technologies with Paul Bourke


When I was asked to present something on visualisation, it was kind of difficult to work out what
to present to this group, because visualisation means a wide range of things depending
on who you are and what your discipline area is. Generally in this area, in the humanities,
visualisation tends to be around what I would call information visualisation. And although
I’ve been involved in some projects in that space in the past, I would have had to trawl back
over the years for things I’ve done. Instead I thought I’d talk about stuff I’m involved in
right now, mostly with humanities groups at the University of Western Australia. So these are
some of the people I’m collaborating with, just listed at the beginning. The projects I’m
going to describe are not my initiatives; I’m working in a technical capacity, providing
solutions for these projects. These are the groups, and as I describe the projects it should
be obvious who is who, and I’ll mention them more as I go. I’ve got a slight sore throat, so
if I cough it won’t affect you, because I’m not using this microphone, but it might bother the
audio recording. And there’ll be a segue to a slightly humorous story at the end related to
that cough, but we’ll come to that later. So when I first saw the invite I saw that it was two
hours. So, I’ve got two hours’ worth of slides for you. I’ll go fast. Okay, as is obligatory with these things
I will give a very brief one-slide introduction to iVEC. I’ll also talk for just five or
ten minutes on what I am supposed to be doing, if you like, which is around science and data
visualisation, and then I’ll present areas that I’ve been working in with the humanities at
UWA, mostly around data capture, image capture and novel ways of doing that. It’s not
particularly new; people have been applying these techniques already. I guess what I’ll be
presenting here is the fact that we’ve got local expertise in iVEC on how to do this, and
they are projects that are relevant to this scene.
So, the one-slide introduction to iVEC. It’s a partnership between five research
organisations, and of course Curtin is one of those. And it’s basically about, and this is my
definition, you might get different ones depending on who you talk to, but for me it’s about
facilitating advanced computing across the partner organisations. That includes hardware, so
iVEC is managing supercomputing facilities, and also managing visualisation and data storage,
but it’s equally about the expertise in the organisations. There are these five programs, and
some of you may know Valerie, who is the education program leader, so if you want to know
about that you can talk to her later on. There’s an e-Research program headed up by Ginny
Harrison at UWA. There’s an Industry and Government Uptake program under Andrew Beveridge, a
Supercomputing Technologies program under George Beckett, and there’s a Visualization team of
three at the moment. So, this is my
definition and, like I say, visualization is a pretty broad thing. But in the sciences, data
visualization is really about insight, providing insight to researchers or to the public, and
it’s using advanced computing, often advanced computer graphics, to achieve that. And, of
course, my mother has trouble telling her friends what I do, because even I’m confused
sometimes. So, this is the sort of definition I tell my mother so she can explain what I do.
It’s kind of an exciting area to be involved in because it requires, or often requires, quite
a range of expertise. Obviously there’s a lot of computer graphics involved, computer
programming, and having a good foundation in mathematics is pretty useful. But then it goes
into more creative things as well, like perception theory and human-computer interfaces.
So, I’ll just go through a couple of previous projects around
science visualization just to set the scene for what I’m often doing and then I’ll start
into these three project areas. So, this is a visualization of computer simulations in
the weather area. In this case it’s a tornado. And this kind of has a lot of the characteristics
of what visualization in the sciences is about. It’s about this mapping of variables into
graphics. And we have a language for doing that. We use colour: red is hot and blue is cold,
some sort of scale there. We have vectors to show directions, in this case wind directions.
And we introduce, you know, balls that bubble around to represent something else; I actually
can’t remember what that’s about. But this is the language of how you convert variables in
your simulation into graphical representations.
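As a rough illustration of that mapping, and only as an illustration, here is a minimal Python sketch (using matplotlib, which is not the tool used for the tornado visualisation) that maps a made-up scalar field to a hot/cold colour scale and a made-up vector field to arrows:

```python
# Minimal sketch of mapping simulation variables to graphics: a made-up
# temperature field shown with a hot/cold colormap, and a made-up wind field
# shown as arrows. Illustrative only; not the software used for the tornado.
import numpy as np
import matplotlib.pyplot as plt

# A synthetic 2-D grid of sample points
x, y = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))

temperature = np.exp(-(x**2 + y**2))   # scalar variable -> colour
u, v = -y, x                           # vector variable -> arrows (a simple vortex)

fig, ax = plt.subplots()
im = ax.imshow(temperature, extent=(-3, 3, -3, 3), origin="lower",
               cmap="coolwarm")        # red is hot, blue is cold
ax.quiver(x[::4, ::4], y[::4, ::4], u[::4, ::4], v[::4, ::4])  # direction glyphs
fig.colorbar(im, ax=ax, label="temperature (arbitrary units)")
plt.show()
```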
During this talk you’ll probably see a few examples of these round images that might
seem a bit strange to you. That’s because these are used often to project into hemispherical
dome environments. So if you see circular things, it’s not that we only generate those; we
generate stuff for flat screens as well. A lot of the examples I’m showing will be from the
iDomes. So, another example is a very common kind of dataset in the sciences, what are called
volumetric datasets. You get these from a whole range of 3-D scanners, whether MRI, CT,
micro-CT or synchrotron. This is an example of data where you have a volume of space and
within that volume you know something at every point. In this case it’s a CT scan, so what
you know about is density; in other cases it might be water content. And so what visualization
is about in this case is revealing the interior of these often solid objects, whether they’re
fossils or, in this case, a mummy which has never been opened. This is currently residing at
MONA in Hobart as a public exhibit, but this was part of the research phase of looking at the
interior of this mummy, looking at how she died and whether there is jewelry, all the sort of
forensic work that goes on there.
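To make the idea of a volumetric dataset a little more concrete, here is a minimal Python sketch, with a synthetic density volume standing in for a real CT scan, of one of the simplest ways to look inside it, a maximum intensity projection. Real CT visualisation uses proper volume rendering with transfer functions; this only conveys the flavour.

```python
# Minimal sketch: a synthetic "CT" density volume and a maximum intensity
# projection (MIP) through it. Real scans are read from DICOM or similar and
# rendered with transfer functions; this only illustrates the idea of
# "knowing something at every point in a volume".
import numpy as np
import matplotlib.pyplot as plt

# Build a 100^3 density volume: low-density background with a dense
# spherical "inclusion" hidden inside it.
n = 100
z, y, x = np.mgrid[0:n, 0:n, 0:n]
volume = np.random.rand(n, n, n) * 0.1                  # background noise
r = np.sqrt((x - 60)**2 + (y - 40)**2 + (z - 50)**2)
volume[r < 12] = 1.0                                    # dense object inside

# Projecting the maximum density along one axis reveals the interior object
# without ever physically "opening" the volume.
mip = volume.max(axis=0)
plt.imshow(mip, cmap="gray", origin="lower")
plt.title("Maximum intensity projection of a synthetic density volume")
plt.show()
```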
And then this next example, which I think is the last one, is also volumetric data, but it’s
not a scan, not a recording of a volume. It’s a computer simulation, in this case of galaxy
formation, from astrophysicists at ICRAR. And again it’s this mapping between what’s happening
in the simulation and the graphics. In this case there are three things: stars, which are the
whitish stuff; HI gas, which is red; and dark matter, which is the bluish stuff. And so this sort of highlights
another characteristic of a lot of these datasets and this is a huge amount of data here. Every
time step is about 36 gigabytes and they ran this basically from the big bang until present
day, not saving every time step of course. So a characteristic of what happens in the
sciences is this whole big data thing, where you have an awful lot of data that goes into
generating these datasets. In this case the same machine that generated the datasets was used
to do the visualization, just because that was the only place where there were sufficient
resources to do it. So, visualization has got that word visual
in it. So, you might imagine that most of the time what we’re doing is using the human
sense of vision to convey this information to your brain. And so you might imagine that
what we should be doing is sort of leveraging what our visual system can do and so if you’re
sitting in your office there are these three things that your desktop display, unless you’ve
got a really special one, doesn’t do. And that’s fairly obvious when you think about
them; the first is stereopsis, so we have two eyes. They’re offset. We get the sense
of depth. It’s not hard for you to imagine that if you’ve got a complicated bit of geometry
that you’re trying to understand or a complicated process that having that sense of depth would
be a valuable thing to aid in your understanding of what’s going on. We also have a very wide
field of view. So, if you’re sitting in your office you might be using 30 to 40 degrees
of your available field of view and the rest is just your office. Whereas, if you look
straight ahead and put your hands out here, you can still see your hands. So, we’ve got
an almost 180 degree field of view. And so again, you might imagine that there are times when
being immersed, being inside your dataset with your entire field of view filled up, might be
a useful thing. And the last thing is visual fidelity. Granted, if you’ve got a retina display
and you’re sitting up close, you’re getting pretty close to what your visual system can
resolve, but if you’ve got a large one-to-one display, generally the resolution is much less
than what your eye can do in the real world. And so what I’m coming around to saying is that
visualization is often about employing displays that make use of those characteristics of the visual
system. One way of doing that is through tiled displays; that’s how you get a very large
pixel count, a very large resolution, and you stand back to sort of zoom out. The peripheral
vision side is something that’s done through something like an iDome, but there are other
ways of doing that as well. So, just quickly, some of the tools that I provide in this space:
it’s around people, so there are three half-funded positions at the moment; there’s budget to
support projects at the partners; and we have various machinery to assist in that
visualization process where it will add value. And there are also various instruments that we have to solve
problems in various spaces. So, two plugs for today. The first one is Digital Humanities
2014. I was halfway down the road and realised I’d forgotten to bring it, so I had to drive
back and get it, or I would have been in trouble with Ginny, who’s heading up this
conference. It’s an international conference happening early next year. And the second plug
is for Curtin, who’ve got a pretty exciting facility coming online, and some of you may know
about this. If you have questions about it, Andrew Woods is sitting in the audience and can
field any questions you might have. So, this is a pretty exciting space going
in at the John Curtin Gallery. It’s going to have a number of these machines which facilitate
this visualization process; I’ve listed some of them there. It’s in a gallery area, so it’s
going to have two modes of operation: one will be a research mode and the other will be an
exhibition-type mode. And there are these various targeted applications at the moment. I
don’t know if I was supposed to put the November launch date in there, or how solid it is,
but that’s the plan, for a
launch in November. Okay, so the three projects, or the three discipline areas, the three
capabilities, that I’m going to talk about: as I said at the beginning, the first is going to
be high resolution photography, the second one is around 3-D reconstruction of objects from
photographs, and the last one is in the cultural heritage space, around 360 video. For each of those I’ll show a number
of examples. So, starting off with high resolution capture: the thing all these have in
common, I guess, or at least I’m pretending they have in common, is that they were developed
to solve problems in other areas, in the sciences or in engineering, and they are
increasingly being used in a broader sense. This one is about realising that if you want to
capture something, to photograph something, at a higher and higher resolution, you can’t just
go out and buy a camera with an arbitrarily high resolution sensor. Sensor resolution is
currently limited, and its growth has been quite slow over the last five years. It’s also
important to realise that an ever higher resolution sensor may not get you anything, because
you may not be able to afford the optics, the lens, that has to go onto it. So, how does one
do this? It’s fairly obvious: you just take lots of photos. If you want to photograph
something at a particular resolution, you scan across it taking photographs, stitch those
together with algorithms that are pretty well established in computer science, and you have
an essentially arbitrarily high resolution image.
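Just to make that concrete, the stitching step really is off-the-shelf. A minimal Python sketch using OpenCV’s high-level stitcher might look like the following; the folder and filenames are made up, and the projects described here used a dedicated panorama workflow rather than necessarily this library.

```python
# Minimal sketch of stitching a set of overlapping photographs into one large
# mosaic with OpenCV's built-in stitcher. The real capture used a motorised
# rig and a dedicated stitching workflow; the folder and filenames are placeholders.
import glob
import cv2

images = [cv2.imread(path) for path in sorted(glob.glob("tiles/*.jpg"))]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.png", mosaic)   # beware file-format size limits for very large results
else:
    print("Stitching failed with status", status)
```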
So, why do you want to do this? In the projects I’m going to show you, some of them are
around capturing imagery from sites that are hard to get to, where you can’t just drive back
and recapture your research data if you didn’t get it the first time. So there’s access, and
all of these things are really about the idea that if you’ve got the opportunity to capture
something, you might as well do it well and generate these richer research objects, rather
than doing it in simpler ways. So, this has been used
for a long time. Most of you will know what the Hubble is, and you may have seen the Hubble
deep field photography, which is pretty amazing, because this particular section here, if you
hold your hand out and imagine what one millimetre square is on the sky, that’s what this
image is, and there are 10,000 galaxies in it already, and they are limited in how far, how
deep, they can see. So, it’s pretty amazing for all sorts of reasons, but this is a mosaic, or
whatever you want to call it, of something in the order of 300 to 350 images stitched to form
this one image. It’s being used fairly regularly in optical microscopy. Again, this
is a four by four grid; in this case it’s the opposite end of the scale to astronomy, down at
the microscopic scale. This is an optical microscope that takes these in a grid and stitches
them together. And we’re also doing work with geologists, again going out to remote areas.
This is a 20 meter long strip that they’re interested in. You can’t get the camera far enough
away to catch the entire thing, but they also want high resolution across scales so they can
do cross-scale studies, and this is another way of achieving that. So, the first project that
we applied this to was a place called Wanmanna, which
is there. People talk about Perth being the most isolated city; well, go to Newman sometime
and see what that’s like. It’s a lovely site. This is kind of what it looks like, and there
are these two rock faces. There’s not normally water there, but we were there after some
rain, and they recorded something like 200 to 250 pieces of rock art across that site. So,
it’s kind of interesting. For me, this was my first trip to explore some of these ideas with
the Centre for Rock Art Research at UWA, and I was kind of surprised that at no point did
they capture the entire site; they were only really concentrating on individual pieces of
rock art. So, this is them clambering across the rocks, and as you can see there are places
you can’t get to, and strewn across all of this is rock art. So, the technology
for doing this, and I guess this is the other take-home message for the three things I’m
talking about here, is that the equipment is not particularly special and there aren’t huge
budgets involved in capturing this sort of stuff. The technique here is pretty
straightforward: you have this motorised rig, which is this thing here, you stick a camera
and a lens on it, and the motorised rig just takes all these photographs. By today’s
standards this is actually a pretty coarse image; it’s only 40,000 pixels across. We’re
currently doing them at something like 400,000 to 500,000 pixels across, but this was the
first exploration of this on site. And, in a simplistic way, this is what
you’re getting: you’ve got this really high resolution image and you can zoom in and pan
across. It’s essentially what I’m calling armchair archeology, where you can explore the site
when you get back, in the comfort of your office, rather than in 15 degree heat with
mosquitoes and possibly crocs for all I know (there weren’t any). And this kind of highlights
what you can do, and like I say, some of the rock art here is not particularly exciting or
even recognisable. This is a person’s head with a ring of many people as a headband, and this
is still zoomed out at, I think, 25 percent of full resolution. So, we did this for each of
the rock faces, and this is how people can explore these sites when they get back to their
offices. This is supposed to be a turtle; I don’t quite get it myself.
And of course my favourite bit of rock art from this place with the two kangaroos mating.
There were some challenges, and this is where the computer science, and even mathematics,
imaging and computer graphics aspects come in. If you capture one of these things you can’t
just save it as a JPEG, because it turns out, and you don’t realise until you try, that JPEG
won’t let you go beyond about 32,000 pixels, and you hit similar limits with PNG and TIFF.
There are solutions to this, but the other thing you realise is that you can’t just
arbitrarily open these things in Photoshop. Most image viewing programs expect the entire
image to be in memory at once, and if I have a 10 gigabyte image, which notionally requires
perhaps 30 gigabytes of RAM, not only don’t I have 30 gigabytes of RAM, I don’t want to wait
all that time either. So, there are technologies, and I’ll just skip over this, for managing
that process, where you only get delivered the part of the image that you can see at the
current time. You’re actually all familiar with that, because that’s how Google Earth works:
in Google Earth you don’t have all the textures, you’re just delivered the parts you can see,
at the resolution you can currently perceive.
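A minimal Python sketch of that idea, cutting a big image into a pyramid of fixed-size tiles so a viewer only ever has to fetch the tiles covering the current view and zoom level, might look like this. It is illustrative only; real gigapixel viewers use formats and tools built for the purpose, and the paths here are placeholders.

```python
# Minimal sketch of the "only deliver what you can see" idea: cut a large
# image into a pyramid of fixed-size tiles at successively coarser resolutions,
# so a viewer can fetch just the tiles covering the current view and zoom level.
# Illustrative only; the paths are placeholders.
import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None            # allow very large inputs
TILE = 256                               # tile size in pixels

def build_pyramid(image_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(image_path)
    level = 0
    while True:
        w, h = img.size
        for ty in range(0, h, TILE):
            for tx in range(0, w, TILE):
                tile = img.crop((tx, ty, min(tx + TILE, w), min(ty + TILE, h)))
                tile.save(os.path.join(out_dir, f"L{level}_{tx // TILE}_{ty // TILE}.png"))
        if max(w, h) <= TILE:
            break                        # coarsest level fits in a single tile
        img = img.resize((max(1, w // 2), max(1, h // 2)))   # next, coarser level
        level += 1

# build_pyramid("mosaic.png", "tiles_out")   # hypothetical input and output
```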
So, the next project was out at Beacon Island, so you can see I get around, which is one of
the fun parts of the job. Most of you are probably Australians; I’m not, I’m a New Zealander,
so I didn’t know anything about the Batavia when I started this, but I do now. For those of
you who don’t know it, it’s a pretty sad story. You can read up on the Batavia and Beacon
Island, but the bottom line is that 350 or so people got shipwrecked on this island with no
water, and bad things happened, to the extent that the chief conspirator had this idea that a
sustainable population was probably around 40 to 45 and went about trying to achieve that.
Anyway, that’s another story. So, what’s going to happen on this island is that these are
old fishermen’s huts, and they haven’t been occupied for some time and are generally starting
to fall apart. So, instead of this island, which is now a reserve, starting to look
increasingly like a junk pile, they are going to remove the huts; that’s going to happen
perhaps toward the end of this year. And the archeology aspect of this is that it’s expected
that many of the graves, where people were buried, are underneath these huts. The shipwreck
was back in the 1600s, but the reason the graves are possibly under these buildings is that
the buildings were built on the ground that wasn’t hard coral, ground that you could dig, and
that’s probably where you would dig graves to bury people. So, the buildings get removed and
the team goes back to look for some of the gravesites. My involvement in this has been to
record, or to manage the group that is recording, how this island is right now, because it’s
going to look very different in a year’s time. And so again, that involves
this really high res photography. So we set up out on one of the wharves, or out on the boat,
and we capture, in this case I think, 200,000 pixels across, and it allows you to do these
ridiculous zooms in to at least those parts that you can see. And even this I think is one to
one; no, I think there’s still another factor of two in there. One part of this project is
creating a virtual world, reconstructing all the huts and putting in all the foliage, and an
important aspect of that is getting good textures. So this is going to be the basis for
extracting textures, because we can take one image and it has information about the texture
and building materials across the entire range
of huts. So, going back to that display idea, one of the uses of these displays is for
looking at this sort of high res information. I don’t actually know what the scale is here,
but if you imagine this on your computer screen, your computer screen is probably a tiny two
or three millimetre section of that top image. Whereas on this display, which is not really
particularly high res (the John Curtin Gallery one will have another set of these, so it will
be a four by three rather than a two by three), you can explore these really high res images
without having to continuously do the panning and zooming operation that you normally do. So,
very quickly, the last
piece on this is something which I just threw in at the last minute, because I did it three
days ago, and it is applying the same techniques to photographs, or in this case to
paintings. So, this is a dot painting by this artist here, and the question was around
capturing this at a higher res, both so that we’ve got a digital version and also to do some
sort of forensic work on it. And the setup, like I say, is pretty trivial. This is one of the
rooms at UWA; there’s nothing particularly special here, it’s all standard camera gear,
standard lighting gear. And this rig does the same thing: it scans across taking little
photographs, which we then stitch together. The painting itself is about a meter wide, and
then you can zoom in and just see this lovely, lovely detail. And you see things that you
just did not realise when you looked at the original painting, like the fact that it’s been
overpainted. Each of these rings started off having two colours and they’ve since overpainted
them with white, and you just can’t see that, because even this gap here is sub-millimetre in
real life. So, the next thing I’m going to
be talking about is 3-D reconstruction. And by that I mean not producing 3-D objects or
assets by hand, which of course involves all sorts of interpretation, but creating them
automatically from photographs of an object. Again, it’s not a new thing; it’s been around
for a long time. In fact, parts of the Australian mining industry were actually leading this
field some time ago, photographing mine pits or mine operations, going back a week later,
doing it again and calculating the volume of material extracted.
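The full reconstruction pipeline (matching features across photographs, recovering camera poses, building a dense point cloud, then a mesh and textures) is normally handled end to end by dedicated photogrammetry packages rather than hand-written code. Just to give a feel for the very first step, here is a minimal Python sketch of pairwise feature matching with OpenCV; the filenames are made up and this is not the software used in these projects.

```python
# Minimal sketch of the first step of photo-based 3-D reconstruction:
# detecting and matching features between two overlapping photographs.
# Full reconstruction (camera poses, dense cloud, mesh, textures) is normally
# left to dedicated photogrammetry tools; the filenames are placeholders.
import cv2

img1 = cv2.imread("rock_art_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("rock_art_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive feature points in each photograph
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only clearly good matches (Lowe's ratio test)
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

print(f"{len(good)} reliable correspondences between the two photographs")
# It is correspondences like these that let the software work out where each
# camera was and, eventually, the 3-D shape of the photographed surface.
```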
And it’s got a few nice characteristics: it’s non-intrusive, which is quite important,
because often with these artefacts you can’t take them away with you, you have to do it in
situ, and often you can’t take some of the scanning machines to the site. So there are a few
reasons why we want to do this. Some of the previous work in this state has been things like
this, which is a totally reconstructed form built from a whole range of photographs of this
pit. There has been research at Curtin, ARC, Servite and UWA doing this sort of work, where
you capture rock faces and outcrops and apply this kind of analysis. So, the first
kind of exploratory work in this space, for me, was again at Wanmanna; this is one of the
rock art faces at Wanmanna. And again, what the archeologists were doing there was the very
traditional thing that they do: they photograph these things up close, they take bearings,
they sketch them, they annotate; there’s a whole bunch of data that they record for each one
of these things. But the idea here was
to use a set of photographs to capture a much richer representation of each of these pieces
of rock art, namely that the rock they’re on is a three-dimensional object. In this case I’ve
shown five images; I think about 10 in total were taken of this structure. And this is what
you end up with: a 3-D model. Certainly a much richer research asset to have than just a
whole bunch of photographs. This is a view that’s not necessarily obvious, but this is the
three-dimensional computer mesh, and it’s got the original images as textures on top of it.
This one is kind of surprising, because it is just three photographs; you don’t even have to
take a lot of time, and in some cases three photographs of a structure like this are enough
to reconstruct a 3-D model of it. And in case it’s not obvious, this is entirely automatic;
there’s no human intervention here except taking the original photographs. And, I can’t help
myself, but this is 6,000 years old, so it’s got to be an alien, right? Big head, bug eyes
and three or four fingers.
Okay, so again, this is a project not in Western Australia; this is Hong Kong, Dragon
Gardens. If anyone here is a James Bond fan, this was where about 30 minutes of “The Man with
the Golden Gun” was filmed. It’s an absolutely fantastic garden area. Unfortunately, Hong
Kong has pretty much non-existent heritage laws; there’s no way to really formally say that
something is a heritage site in Hong Kong. The top image is from the movie, and this place
hasn’t changed much since ’74; there’s the bottom image there. And, in fact, this little
round thing here is where the husband and wife who developed this land were buried, sorry,
are buried. This area is known as Development 5A, and unfortunately this is Development 5B.
So, there’s a big push to try to preserve this particular site as something significant in
the history of Hong Kong’s development. The person who developed this land was a major
industrialist of the past, and it’s rated at something like a million Hong Kong dollars per
square meter, which is, what, 200K per square meter or something. Anyway, one of the
activities here is
to both convince people that this is a site worth protecting, but also, in the back of their
minds, if they lose, to have a digital record of most of the structures and artworks here.
So, we kicked this off about four months ago. It’s called Dragon Gardens, and there are
dragons everywhere. That’s actually a light fitting: this is a ceiling, and the little red
thing there is actually a little red light bulb, so you’re looking up at this. And this kind
of gives you a sense of it: this is what the 3-D model looks like, there’s a mesh behind it,
and then the mesh is textured and this is the final result. So again, a very rich
representation of the ceiling, which we hope will not be destroyed in a few years’ time. This
is just another staircase; these are both the 3-D reconstructed versions, and this is just
showing you the mesh over here. Those examples I showed you there are, if you like, two and a
half D; they’re not full 3-D objects, but you can do full 3-D objects. This is one of the
little lions sitting in front of the mausoleum, and it turns out it was only after I got this
back that I realised he’s actually got his foot on a little baby lion. Oh well, it doesn’t
matter. Okay, so then
just the last bit of this, which will also lead into the next bit on 360 video, was a project
about two months ago where I finally got to go bush in the outback of Australia. This was a
project out of the Australian National University, which is about recording the Ngintaka
story. It’s a story emanating from pretty far west, near Uluru, around where the Northern
Territory, Western Australia and South Australia meet.
And just to fast-forward through this story, which is not the same as sitting around a
campfire with an indigenous leader telling you: basically the grinding stone in Ngintaka’s
village was pretty poor, and their meal was rough as a consequence. But Ngintaka could hear
this lovely smooth grinding stone in the distance, so he decided to go and find it. It took
him three or four days’ walking, but he found the neighbouring tribe that had this lovely
grinding stone. He endeared himself to the tribe, including the chief’s daughter, but that’s
another story, and would go hunting with them. Then one day he pretended to hurt his foot so
he couldn’t go hunting; everyone else went hunting and, of course, he grabbed the stone and
made off with it. The villagers returned and put two and two together: Ngintaka’s missing,
the grinding stone’s missing, so they went after him. He denied all knowledge, they
persisted, and eventually they killed him and recovered the grinding stone. There’s a lot
more detail if you hear it told properly. But the story is told across the landscape, so
besides the rest of the team, who were recording video of the people telling the story and
the dances that go along with it at each of these locations, I was recording these high res
images of each of these nodes. In fact, in this case this is supposed to be Ngintaka’s head.
I didn’t mention, Ngintaka is a kind of lizard, so you might imagine, with a little
imagination and eye fuzziness, that this is a lizard’s head sticking out of the rock face.
An important part of this story is the grinding stone. So, here
is a photograph of the grinding stone, or an example of a grinding stone. Capturing this is
leading up to an exhibit that’s going to be in the South Australian Museum, so part of the
interest was to capture 3-D assets that they could use in virtual worlds for this exhibit.
So, this is a grinding stone, and this is a 3-D reconstruction of it. But kind of a better
view is this because, although some people may see artefacts, that’s a pretty good
representation, I reckon, of the original thing: there’s the original, and there’s the
computer reconstructed version, with no human intervention. There are a couple of other
complicated objects that are part of this story, and one of them is these headdress things
that they wear. This is an example here, and they are extremely complicated. It can’t be
taken away; it stays there. And it would be very hard for a human modeller to make a computer
version of this for the exhibit. So, this is the result of reconstruction. Now, this is not
perfect; there are a few extraneous things going on there, but this would be the first pass
of something which a human will go in and clean up the details of. This is a full 360 object.
It’s thicker in the middle, of course, because of what’s holding it up; that will be removed
from the final version.
So, the last topic is around 360 video. As usual, this technology was originally developed
for things like security, so you can put one of these cameras in a building and the person on
the security desk can see everything that’s happening at once in a single camera shot. The
two examples that I’ll talk about will be, again, recording one of these Ngintaka stories and
then also this Mah Meri ritual in Malaysia. So, the camera
that we’re using: there are others, but this is still about the best around in terms of
resolution. There are cheaper ones, but this is pretty much where the technology is at the
moment. And like I say, the main applications before the cultural heritage stuff were around
security. We’ve done some projects with Rio around remote operations. ECU are currently doing
some work around the performance of teachers in classrooms, and I’ll show you some orchestra
sort of work in that area. And then sports science: I’ve been doing some exploratory work on
using these immersive displays, which you need to capture some data for, for presenting sport
scenarios and doing evaluation of elite athletes.
So, this is a performance sort of evaluation. This is just a single shot just to show you
what comes off the camera. And if you think about what an Earth map looks like, this is
the same sort of projection. You might imagine that if you walk off here you come in here.
I don’t know if you can see anything wrapping around, but that’s the end of this same structure
here. So, the camera is capturing 360 degrees. It’s capturing pretty high resolution as a
video stream. And then the North Pole is stretched across the top so that’s why it looks distorted.
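The image coming off the camera is stored in that map-of-the-Earth (equirectangular) layout, and everything shown later, the cylindrical strips and the dome fisheyes, is just a remapping of it. As a rough illustration only, here is a minimal Python sketch that remaps one such frame into a 180 degree angular fisheye of the kind a dome wants; the filename is a placeholder and the real pipeline uses dedicated remapping tools, not this script.

```python
# Minimal sketch: remap one equirectangular frame (360 x 180 degrees) into a
# 180-degree angular fisheye, the sort of image a hemispherical dome expects.
# Illustrative only; the filename is a placeholder.
import numpy as np
import cv2

equi = cv2.imread("equirect_frame.jpg")           # width:height should be 2:1
H, W = equi.shape[:2]
N = 1024                                          # output fisheye size in pixels

# For every fisheye pixel, work out which direction it looks in ...
j, i = np.meshgrid(np.arange(N), np.arange(N))
nx = 2 * j / (N - 1) - 1                          # -1..1 across the fisheye
ny = 1 - 2 * i / (N - 1)
r = np.sqrt(nx**2 + ny**2)
theta = r * (np.pi / 2)                           # angle from the view axis (180 deg fisheye)
phi = np.arctan2(ny, nx)
dx, dy, dz = np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)

# ... then which equirectangular pixel that direction corresponds to.
lon = np.arctan2(dx, dz)                          # longitude around the vertical axis
lat = np.arcsin(np.clip(dy, -1, 1))               # latitude (elevation)
map_x = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(np.float32)
map_y = ((0.5 - lat / np.pi) * (H - 1)).astype(np.float32)

fisheye = cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)
fisheye[r > 1] = 0                                # black outside the fisheye circle
cv2.imwrite("fisheye_frame.png", fisheye)
```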
But if you look at this in the correct way it doesn’t look distorted; it looks quite natural.
One of the nice things about capturing the video like this is the way you can repurpose it.
One of the devices that will be in the John Curtin Gallery is a cylindrical display, and this
is one way of capturing material for it: you can take a stand and a video camera and record
stuff for that display. So if you record everything in this room, say I put the camera in the
centre and record everything, you can then extract these other assets from that. So, this is
a cylindrical projection, and these are the types of displays that you might present it in.
And, of course, you can do the same thing for a dome environment. Again, you’ve captured
everything, and a fisheye is only half of everything, so you can generate an arbitrary number
of fisheyes, and this is the iDome and some of the other opportunities for how you might
present this material. The
remote operations work was around iron ore ship-loaders at the Karratha port, and the
requirement here is that the operator needs to see everything that is happening. The operator
normally sits out over the top of the ship and can look in any direction, so if they’re going
to put cameras in there, they need this same ability. Doing that with discrete cameras has
problems, with the operator having to work out which camera comes after which camera, and
it’s also not a seamless image. And there’s some of the sports science stuff, which again is
about presenting immersive content. Okay, so the last
part of the Ngintaka story is this cave. This cave is actually Ngintaka’s belly: they’ve
killed the lizard and this is his belly. And as per my last shot, the images that you’re
going to get here are 360 degrees this way and 180 the other; this is a still, and the
Ladybug only captures down to about minus 140. So, this is a quick bit of footage from that.
These are the storytellers over here on this low resolution screen, and this is the dance
that occurs while they’re telling the story. And this makes it obvious that this is a 360,
because as he walks off here he comes in over here, because that’s where it splits. And this
is how you might look at that inside the iDome environment. Sorry about the flashing; that’s
some sort of interference between my camera and the projector, it’s not in the real thing.
Of course, an iDome, an environment like this, presents 180 degrees and we have 360 degrees,
so you can navigate in this environment to look around and concentrate on what you’re
interested in. Okay, so to finish off, another project
from earlier this year, which I haven’t mentioned yet, was with the Mah Meri, a remote tribe
in Malaysia, West Malaysia. And again, this was a cultural heritage project, coming out of
the University of Malaya, to record this ritual that they do. It’s about healing. Basically,
someone in the village gets sick; the priestess, the wise woman of the village, goes into a
special room, has a dream, imagines what the evil spirit of the sickness looks like, and
conveys that to a carver who creates a mask of that spirit. They put the mask on the patient,
then they perform the ritual, which I’ll show you in a minute, around that patient. They then
take the mask, put it in the river, and hopefully the patient gets better. So, it was about
recording both the masks, of which, as I’ve just described, there aren’t really any in
existence, because the only ones people have are the ones that have washed up on some shore,
since they get rid of the masks, but also capturing this ritual that they do around the sick
person. So, I’ve shown
you some panoramic work, and I remember 15 years ago when panoramas were still new and I
would travel the world. What I was going to do, I was going to be a real geeky guy and take
panoramic photos, because that’s what was new at the time. And unfortunately, I find myself
doing the same thing with 3-D: I travel somewhere and I see an object that I’d like to show
my wife, so I take multiple shots, go back to the hotel, reconstruct it and send her the 3-D
model. That’s kind of the new tourist thing. And, in fact, it’s already happening; there are
a couple of initiatives already to build this sort of reconstruction into mobile phones. It
won’t be as good as what you can do if you do it carefully, but it’s already happening.
Anyway, so this is this
dance that they do around a central pillar, which is where we put our camera and there
is this wonderful character, who is this guy with his mask who does this dance, which would
normally be a dance around the patient. So, this is some of the footage. It’s really nice
stuff. So, we capture this 360 material, which will probably be viewed inside a cylinder, and
we also have a first person point of view of it as well. And like I said before, we can
repurpose this, so we can create dome versions of it, and this is again how you might
experience it in the iDome. Okay, so just to finish: this hut here
is where the priestess does her thing when there’s a sick person, with some herbs and stuff
that she takes to put her into a suitable state to imagine what the spirit might look like.
And, of course, you’re not allowed to go into that room; it’s cursed, and none of the locals
will go in there, and I totally respect that. Well, I respect it while the locals are there.
But when everyone left I couldn’t resist, and the room was quite nice. [laughter] It’s got
all these artefacts and this sort of paraphernalia for generating whatever is generated. And,
of course, my colleagues, who were perhaps more culturally sensitive than I was, were telling
me I was going to get into trouble for this. And sure enough, I got sick the next day, and
this sore throat is two months old from that. [laughter]
So, thank you.

[ Applause ]
