- David Walsh (Frontier Developments)
- Richard Evans (Little Text People)
- James Fairbairn (Media Molecule)
- Richard Newcombe (Imperial College, London)
- Guy Davidson (The Creative Assembly)
- Rob Pieke (The Moving Picture Company)
- Michel Valstar (Imperial College London)
- Ian Ballantyne (Turbulenz)
- Marc Hull (Frontier Developments)
Games development is a fun place to be
The talk looks at the changes already happening in the games market, including the rise of new hand-held machines, what they mean for the future, and the diverse and stimulating career opportunities for graduates.
Little Text People
Richard Evans is an AI designer, currently working on a text-based multi-agent people simulation, collaborating with Andrew Stern (Facade) and Emily Short (Galatea). Previously, he was AI lead on The Sims 3 at Maxis/EA, and, before that, the AI lead on Black & White. He is a regular speaker at GDC and at UC Berkeley's Social Ontology Group.
"Declarative Modelling of Social Practices using Exclusion Logic"
People simulations require simulating multiple concurrent social practices (see e.g. the multiple concurrent games in Facade). How should these concurrent social practices be modelled? Typically, they have been implemented in some sort of imperative programming language (e.g. ABL). This talk will describe a *declarative* language for describing social practices. I will show it working in a real-time multiplayer game.
First, I will motivate the need for concurrent communicating social practices in people simulations.
Then, I will motivate the advantages of a declarative representation of state over procedural languages (e.g. ABL).
Then, I will outline the formal theory of Exclusion Logic: syntax, semantics, decision procedure.
Then, I will show how to attach dynamic processes to the declarative state.
Finally, I will show the system working in a multiplayer game.
I have published a couple of recent papers on exclusion logic as a declarative representation language for multi-agent simulations. This talk will go further and show it used in a multiplayer multi-agent real-time simulation.
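To give a flavour of the idea (a toy sketch only, assuming a simplified reading of the published notation, not the actual formalism or implementation), an exclusion-logic-style store can be modelled as a tree of terms in which an exclusive assertion such as `tom.location!kitchen` overwrites rival siblings, while an ordinary assertion adds alongside them:

```python
# Toy sketch of an exclusion-logic-style store (illustrative only).
# Paths are dotted terms; a '!' before the final segment marks it
# exclusive: asserting it wipes any rival children at that node.

class ExclusionStore:
    def __init__(self):
        self.root = {}

    def assert_path(self, path):
        """Assert a dotted path, e.g. 'tom.location!kitchen'."""
        node = self.root
        for segment in path.replace("!", ".!").split("."):
            exclusive = segment.startswith("!")
            name = segment.lstrip("!")
            if exclusive:
                node.clear()          # exclusion: remove rival siblings
            node = node.setdefault(name, {})

    def holds(self, path):
        """Check whether a dotted path is currently true in the store."""
        node = self.root
        for name in path.split("."):
            if name not in node:
                return False
            node = node[name]
        return True

store = ExclusionStore()
store.assert_path("tom.location!kitchen")
store.assert_path("tom.location!garden")  # exclusive: kitchen is retracted
```

The appeal of the declarative form is visible even in this sketch: asserting a new exclusive fact automatically retracts the facts it excludes, with no imperative clean-up code.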
James Fairbairn is server technology lead at Media Molecule. Before joining MM, he worked as a sysadmin and coder in the world of finance. This (sometimes bitter) experience gives him a certain... perspective... on building and running software that serves millions of people. LittleBigPlanet was his first game.
"Share in the community: LittleBigPlanet + 30 months"
I'll be talking about some of the things we learned (technical and otherwise) about engaging with, and nurturing, our online community after shipping LittleBigPlanet. Many of these lessons can, with some thought, be applied to any game, and have the potential to improve a game's longevity and deeply affect the perception players have of your title.
Imperial College, London
I'm a PhD student in the Cognitive Robotics and Robot Vision groups. In the past my work has included different aspects of robotics and model building, including an anthropomimetic humanoid, flying robots and, most recently, the new Kinect depth sensor at Microsoft Research. In my PhD research I've been investigating and developing algorithms for real-time acquisition of physically predictive world models. A major goal of the research is to enable dense surface geometry to be captured as a robot or user browses a scene with nothing more than a single camera. The success of the work is dependent on the general-purpose GPU (GPGPU) paradigm shift in computer technology that has occurred in the last decade, making vast computing resources available at low cost. The increased computational capabilities have liberated us from thinking about point, line and simpler parametric models of a scene to thinking about capturing a full dense reconstruction that can be used in physically predictive augmented reality and gaming, as well as in robot planning and improved camera tracking.
"Live dense reconstruction and tracking: reconstructing the game world on your tabletop"
This talk will outline a general pipeline for live dense reconstruction of scenes with passive camera technology, but will also touch on aspects that are made easier when active camera systems are appropriate. I will motivate the acquisition of dense surface models by demonstrating applications that they make possible. I'll also look toward the future of live scene reconstruction, given the trend of increasing computational power that brings many sophisticated modelling techniques into the real-time arena.
The Creative Assembly
Guy Davidson is the Tools and Infrastructure Lead at The Creative Assembly, developers of the multi-award-winning Total War franchise. He wrote his first line of code in the autumn of 1980 and after flirting with provincial theatre and corporate multimedia entered the games industry (for money) in 1997 with Codemasters, before moving on to CA in the summer of 1999.
"Importing third-party tool data into your pipeline"
Every game developer relies on third-party tools to generate content at some point, be it models, textures, samples and so on. Third-party file formats exist to serve the tools that created them. Unfortunately, this is almost never optimal for a game engine, so some additional processing is required: this is the data pipeline. In this talk, I will discuss strategies and pitfalls for building your pipeline.
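As a minimal illustration of why such a processing step exists (the formats here are hypothetical; any real pipeline is far more involved), consider repacking a simple text-based mesh export into the compact binary layout an engine would rather load:

```python
# Sketch of a single pipeline import step. The 'v x y z' text format and
# the engine-side binary layout are both invented for illustration: the
# tool-friendly format is parsed once offline and repacked into data the
# engine can load with no parsing at runtime.
import struct

def import_mesh(source_text):
    """Parse 'v x y z' lines and pack positions as little-endian floats."""
    positions = []
    for line in source_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "v":
            positions.append(tuple(float(p) for p in parts[1:4]))
    # Engine-side layout: uint32 vertex count, then tightly packed float3s.
    blob = struct.pack("<I", len(positions))
    for x, y, z in positions:
        blob += struct.pack("<3f", x, y, z)
    return blob

binary = import_mesh("v 0 0 0\nv 1 0 0\nv 0 1 0")
```

The tool format optimises for authoring and diffing; the engine format optimises for load speed and memory layout. The pipeline is the bridge between the two.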
Research Lead, The Moving Picture Company
Rob Pieke is the Research Lead at MPC in the heart of Soho, London. He dabbled in computer graphics programming in BASIC on the PCjr from an early age, but was completely hooked by the visual effects industry after seeing Jurassic Park in the cinema. After studying Computer Engineering at the University of Waterloo, Rob led a small VFX R&D team at C.O.R.E. Digital Pictures in Toronto from 2003-2007. He then moved to London to join MPC as a Senior R&D Artist on The Chronicles of Narnia: Prince Caspian, and has remained with the company ever since, developing a series of Character, FX, and Core technologies. Presently Rob is focused on investigating the state-of-the-art in computer graphics technologies, and trying to figure out what 'the next big thing' for the visual effects industry will be.
"The technology that powers our global visual effects pipeline"
With the recent launch of our New York studio, MPC now operates in four time zones across three continents, often sharing a project among multiple sites. While many of our proprietary software tools are directly used to realise our final imagery, MPC also has many unsung hero tools that run almost invisibly in the background. This talk will give an overview of our technology infrastructure, and look at some of the systems our artists and developers use, from asset management to the simulation of destructible materials. Some of the recent challenges will be covered, concluding with a brief peek into the crystal ball of the visual effects industry's future.
Imperial College, London
Dr. Michel F. Valstar is a Research Associate in the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London, Department of Computing. He received his master's degree in Electrical Engineering from Delft University of Technology in 2005 and his PhD in computer science from Imperial College London in 2008. He currently works in the fields of computer vision and pattern recognition, where his main interest is the automatic recognition of human behaviour, specialising in the analysis of facial expressions. In 2007 he won the BCS British Machine Intelligence Prize for part of his PhD work. He has published technical papers at leading conferences including CVPR, ICCV and SMC-B, and his work has received popular press coverage in New Scientist and on BBC Radio. He is also a reviewer for many journals in the field, including Transactions on Affective Computing, Systems, Man and Cybernetics-B and the Image and Vision Computing journal.
"Technology for Sensitive Artificial Listener Avatars"
We have developed a group of four so-called Sensitive Artificial Listeners (SALs). These talking head avatars were created to be able to hold a sustained conversation without any real understanding of language. Instead, they were developed to react to a user's non-verbal communicative signals, and to use affective signals themselves in response. The four SAL characters are emotionally stereotyped: Poppy is cheerful, Spike is angry, Obadiah is depressed, and Prudence is plain and rational. The characters use affectively charged sentences and non-verbal signals to try and move the user into their emotional quadrant. The avatars were created within a European project called SEMAINE, and are freely available online. Within the project the Imperial team was responsible for the avatars' sight. In this talk I will show how we developed head action detection, facial expression detection, and emotion detection. I will also give a live demonstration of an interaction with the SAL characters.
Ian is a developer at Turbulenz, where he is building and supporting the Turbulenz Engine technology. Prior to joining the Turbulenz team, he worked for Philips Research and amBX, developing lighting technology for games and working with game studios worldwide. He holds a Master of Engineering in Computing from Imperial College, London. When he graduated, Ian chose the games industry instead of finance, and he tries hard to channel his enthusiasm for new technology into software projects in his spare time. When he's not moving bytes around, he lets off steam mountain biking and playing Ultimate Frisbee.
"Turbulenz Engine: A New Approach to 3D Games in the Browser"
Marc Hull is an Imperial College alumnus who has been working as a games programmer at Frontier Developments for the past three years. Since joining the company in 2007, his work has spanned engine tools, procedural geometry, inverse kinematics and gameplay development across a variety of projects.
This talk looks at some of the technology and tools that were written during the development of Frontier's latest game. It covers various aspects of our data-driven engine, which allowed us to add new features and iterate on our design without needing to modify and recompile the core C++ code. This includes an overview of our component model for changing the look and feel of objects within the game world, our behaviour language for concisely representing interactions between characters and the player, and our animation system, which allows us to maintain realistic character movement throughout the game.
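The general shape of such a data-driven component model can be sketched as follows (an illustrative sketch under my own assumptions, not Frontier's actual system): component types are registered under data-facing names, and an object is assembled from a data description that designers can edit without touching the compiled code.

```python
# Illustrative data-driven component model (invented names throughout).
# Component classes register under a string name; entities are built
# from data descriptions, so new object types need no recompile.

COMPONENT_TYPES = {}

def component(name):
    """Class decorator: register a component under a data-facing name."""
    def register(cls):
        COMPONENT_TYPES[name] = cls
        return cls
    return register

@component("render")
class Render:
    def __init__(self, mesh="cube"):
        self.mesh = mesh

@component("health")
class Health:
    def __init__(self, hp=100):
        self.hp = hp

def spawn(definition):
    """Build an entity (a dict of components) from a data description."""
    return {name: COMPONENT_TYPES[name](**params)
            for name, params in definition.items()}

# In practice this definition would come from a data file, not code.
crate = spawn({"render": {"mesh": "crate"}, "health": {"hp": 25}})
```

Changing an object's look and feel then means editing the definition data, not the engine: exactly the iteration loop the talk describes.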