Wednesday, June 30, 2010

Video - Videoconferencing and realtime collaboration in Second Life

This is a simple example of videoconferencing and realtime collaboration in Second Life, which extends and details my previous post Video - Simple videoconferencing demo in Second Life, posted last week. Audio in Italian.

This demo was done using the Shared Media feature of Viewer 2.x and integration with external services. Three participants hold a videoconference in Second Life using live video streams from Ustream, Livestream and a third streaming service respectively, and use Google Apps for collaborative work: they watch a PPT-like presentation available on the web together, and co-edit a document using the new realtime co-editing feature in Google Docs.

The demo shows that some essential videoconferencing, realtime collaboration and co-editing features of business- and e-learning-oriented platforms like Teleplace could, with some tweaking, also be implemented in Second Life.

Friday, June 25, 2010

Video - Simple videoconferencing demo in Second Life

Simple videoconferencing demo in Second Life (audio in Spanish)


We were only two participants in this demo, but this should work with up to 10 users. All three major free personal live streaming services can be used. The user on the left is broadcasting video via Ustream, the user on the right via a third service, and the middle screen is prepared for Livestream.

The best way to do this is to broadcast video only and use native Second Life voice to talk. The sync lag between video and audio is 3-4 seconds with Ustream and the third service, and 2-3 seconds with Livestream, but for applications like seminars it does not matter that much. Using the higher quality options available with all three major free personal live streaming services, the video quality is good enough for videoconferencing in Second Life.
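The basic setup can be sketched in LSL. This is a hypothetical example: llSetPrimMediaParams and the PRIM_MEDIA_* constants are the standard Viewer 2.x Shared Media API, but the channel URL and the face number are placeholders you would replace with your own.

```lsl
// Hypothetical sketch: point one face of a "screen" prim at a live
// stream's embed page using Viewer 2.x Shared Media.
// CHANNEL_ID and face 4 (assumed front of the prim) are placeholders.
string stream_url = "http://www.ustream.tv/embed/CHANNEL_ID";

default
{
    touch_start(integer total_number)
    {
        llSetPrimMediaParams(4, [
            PRIM_MEDIA_CURRENT_URL, stream_url,
            PRIM_MEDIA_HOME_URL,    stream_url,
            PRIM_MEDIA_AUTO_PLAY,   TRUE,   // start the stream on load
            PRIM_MEDIA_AUTO_SCALE,  TRUE    // fit the page to the face
        ]);
    }
}
```

One such screen prim per participant, each pointed at a different stream, reproduces the three-screen layout in the demo.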

Friday, June 18, 2010

ASIM2010-1 First Online Workshop on Advancing Substrate Independent Minds

The First Online Workshop on Advancing Substrate Independent Minds, ASIM2010-1, was held in Teleplace on June 5, 2010. It was a very intense workshop with 10 talks and lively discussions. See the carboncopies website for background. All talks and discussions were recorded on video and may be available upon qualified request (see the contact info on the carboncopies website). The short video summary below is public.

Description of the workshop and summary video: A brief summary of excerpts from the June 5, 2010 online workshop “Advancing Substrate Independent Minds (ASIM-2010-1)”. ASIM is driven by a community with an objective-oriented and action-oriented approach that is aimed at the development of technological means by which to achieve the transfer of cognitive processes from a human brain to an artificial substrate.

A convergence of nanotechnology, biotechnology, brain imaging, etc. promises to accelerate cutting edge developments towards processes such as whole brain emulation, mind transfer, digital personalities, gradual neuroprosthetic replacement and brain preservation.

The first public ASIM workshop will be held on August 16-17, 2010, as a satellite meeting to the Singularity Summit 2010 in San Francisco. For more information, please visit

Comments: this was a very interesting workshop and I look forward to participating in the forthcoming workshops, online and in physical space. I hope I will attend the Singularity Summit 2010 in San Francisco and the satellite ASIM public workshop (I am not yet sure I will be able to go). After listening carefully to all the talks and asking the experts many questions, I am persuaded that:

Mind Uploading is feasible in principle. This is the only position compatible with materialism, the scientific method, and current scientific knowledge.

Achieving MU may take longer than we wish and require reformulations of current notions of self. Having worked so many years in research and engineering management, I know only too well that achieving an ambitious objective very often takes more time than expected, often takes much more time than expected, and always takes more money than expected. So despite many very promising ongoing advances I remain very skeptical about the timeline. I don't think even the first MU research demonstrators will be achieved by 2050 (I am happy to see that others are more optimistic, and I will be VERY happy to be proven wrong).

But, as I said, Mind Uploading is feasible in principle and it will be achieved someday.

Achieving MU will probably require a combination of all methods proposed so far, and then some.

Mind Uploading via a combination of:
- Brain preservation optimized for future scanning
- DNA or softcopy genome storage
- Bainbridge-Rothblatt personality capture (see also the Lifenaut Project recently featured in New Scientist)
may be available to those of my generation. Though our natural remaining lifespan is not likely to be long enough for us to benefit from uploading technology, a combination of these methods may transport (a sufficiently detailed instance of) us to a future where Mind Uploading is an operational reality.

New Teleplace release 3.5, with enhancements and new features

Teleplace has just released a new major upgrade, version 3.5.

The full 3.5 release notes are here:

There are many improvements in both functions and implementation. The main new functions are:
- Echo cancellation and many VoIP improvements. Of course clear and easy to use VoIP is one of the main requirements for successful telepresence meetings.
- A secure room video broadcast feature (beta), useful for very large events like seminars and conferences. This feature permits distributing a large audience across different halls while delivering the same video and audio stream to all.
- Integration with Microsoft SharePoint.
- And many other new features and performance tuning fixes.

With this new release, Teleplace consolidates its leadership in the online collaborative 3D telepresence and videoconferencing space.

Wednesday, June 16, 2010

YES! Hard-core transhumanist splinter groups yearning for cyber-heaven

The anti-transhumanist New Atlantis blog Futurisms has a story on Why Transhumanism Won’t Work. See also the review at Accelerating Future.

The article is mainly an anti-uploading rant. Their technical objections to uploading are, needless to say, very stupid. But they understand the concept of uploading well:

"uploading is the proposition that, by means of some future technology, it may be possible to “transfer” or “migrate” a mind from its brain into some new “embodiment” (in the same way one “migrates” a computer file or application from one machine to another). That may mean transferring the mind into a new cloned human body and brain, or into some other computational “substrate,” such as a future supercomputer with the horsepower to emulate a human brain."

My own position is: Mind Uploading is feasible in principle. This is the only position compatible with materialism, the scientific method, and current scientific knowledge. Denying this is falling into vitalism and mysticism. Our bodies and brains ARE machines which operate according to the laws of physics, machines which can be fully understood by science and improved by engineering. We ARE information, and information CAN be transferred from one computational substrate to another. Perhaps achieving uploading may take longer than some transhumanists thought in the 90s, and perhaps the deployment of uploading technology will force us to re-think our intuitive concept of self. But this does not change the fact that uploading is feasible in principle, and desirable. Some day it will be achieved (and there are VERY promising research projects ongoing), and every human will have the option of leaving biology behind and moving to a post-biological life with indefinite lifespan. This will probably not happen in the first half of the century, but for my generation there are new emerging options for brain preservation.

They also understand well that uploading is a central transhumanist meme, perhaps THE central one:

"transhumanism itself is uploading writ large. Not only is the idea of uploading one of the central dogmas of transhumanism..."

And they understand, better than many transhumanists, the current situation of the transhumanist movement:

"The further mainstreaming of transhumanism seems to require some P.R. maneuvering, including a rebranding (the glossy new name “H+”). It may also require a moderating of ambitions. The old “Extropian” dreams of uploading and wholesale replacement of humanity with technology may be too scary and weird for mass audiences. Perhaps more modest ambitions will have a broader public appeal: life extension and performance enhancement, cool new gadgets and drugs, and only minimal forms of cyborgization (implanting technological devices within the body). In other words, more Aubrey de Grey, less Hans Moravec; more public policy and less cyberpunk; more hipster geeks and fewer socially-impaired nerds. A kinder, gentler Singularity. Maybe even one with women in it... "

Here they are describing the moderate, watered down, lukewarm, cautious, timid, politically correct, and BORING version of transhumanism that some ex-transhumanists turned PC anti-transhumanists wish to promote. No Extropian dreams of uploading to cyber-heaven, but free antioxidants and Viagra for senior citizens.

But, of course:

"If so, distancing themselves from uploading is probably a smart move for the H+ leaders, but it risks a split with their base, and the formation of new, hard-core splinter groups still yearning for cyber-heaven..."

YES! Let's form hard-core transhumanist splinter groups yearning for cyber-heaven. Let's put some vision, imagination and FUN back into transhumanism. Let's re-affirm the bold, fresh, uncompromising and energizing transhumanism of Hans Moravec and Max More. Let's not appease critics and PC idiots, but ignore them. Not kissing ass, but kicking ass.

Tuesday, June 15, 2010

Second Life refugees in Blue Mars

I have been in Blue Mars since the very early beta, but I have never been very active and only visited it once a month or so. I guess I am just waiting to see how the platform develops.

The new version has voice chat and an option to have the camera follow the avatar, similar to the standard Second Life camera. Voice chat works well; in the picture I am talking to two Second Life refugees, one very knowledgeable about technical issues, the other wearing a very nice dress she made herself (yes, you can create in Blue Mars as in Second Life, it is just less immediate and requires different 3D design tools).

I am still following the development of Blue Mars with interest. With many new regions (cities), voice and Flash content, it is becoming more and more suitable for the applications I am interested in. There are always users in the welcome area, and the discussions are becoming similar to those in SL. I wonder whether Blue Mars can capture large numbers of disenchanted Second Life users.

Saturday, June 12, 2010

H+ Summit on Livestream

I am watching the H+ Summit talks via Livestream.

I just listened to two great talks by John Smart and Ken Hayworth. Yes, Mind Uploading may be closer than many think, and it may eventually be available to my generation (and I am probably older than you). Visit the Brain Preservation Foundation now.

Ken's "uploading may be only 15 years in the future" is SO refreshing compared to the cautious and boring attitude of today's moderate transhumanists, repented ex-transhumanists and anti-transhumanists in disguise. I feel back in the 90s, let's hope it lasts.

Summary of the two talks, from the IEET blog:

John Smart: the Brain Preservation Foundation

He’s talking about the brain preservation prize. 100,000 unique humans die every day.

Medicine has many frontiers today. Although we haven’t made a bunch of progress in preventing biological death, computer scientists have made a lot of progress in their fields.

Anatomists can preserve whole human bodies for later viewing with a lot of detail.

Brain Preservation Prize is like the X prize. They have an anonymous donor who will pay $100k to the first winner.

Cryonics is currently available, but the methods used in cryonics aren’t currently good enough to win our prize.

Vitrification is a possible winner, but plastination is the most promising. It'll be simple, dependable, and potentially verifiable.

What would plastination look like? Perfusing the brain with a chemical that fixes the cellular proteins, then, hours later, perfusing a dangerous and toxic chemical to fix the lipids, and then another chemical.

What are the motivations for this? The human connectome, biomimicry, and bio-inspired machines. This could eventually lead to machines capable of empathy and morality.

Some people could preserve their brains for the advancement of life.

In the future, we should be able to extract whole memories and experiences from a static brain.

And of course, some people will preserve their brains so that their minds could be uploaded or transferred to enhanced/robotic bodies in the future.

And some who are deeply uncertain about the future may choose to preserve their brains in a kind of Pascal’s Wager.

What can you do? One step a lot of people forget is to be happy. Science and technology are magical, and we’re incredibly lucky to be alive right here, right now.

Ken Hayworth: Can we extract a mind from a plastic-embedded brain?

He shows some increasingly thin sections of brains. Eventually, we get enough information to reconstruct the precise structure of each neuron. Not only that, but things like the amino acid structures of receptor proteins are preserved.

There are multiple techniques that can do this kind of thing. One that he's talking about could be made 100% accurate.

Using a heated-knife subdivision, we could image an entire brain at the synapse level with that kind of reliability, but we have a long way to go to scale it up that much.

There are incredibly detailed computational theories of the human mind; he recommends the books Unified Theories of Cognition by Allen Newell and How Can the Human Mind Occur in the Physical Universe? by John R. Anderson. He also recommends two books on consciousness.

He says that as a transhumanist he can put 2 and 2 together, and see how this could lead relatively quickly to uploading.

Whose mind could be extracted? Your mind. The only thing that’s stopping that is the lack of reliable brain preservation.

Sign the petition to make this a reality.

Thursday, June 10, 2010

Second Life: New Directions?

In my early morning TechCrunch scan I learned that Linden Lab Lays Off 30 Percent Of Staff. This is already on the main Second Life blogs:

Gwyneth Llewelyn: [Reset] and do a 180° turn
New World Notes: Looking into Linden Lab Layoff Rumors -- UPDATE: Confirmed, 30% Lindens Being Laid Off, Restructured Team to Develop Web-Based Second Life Viewer -- UPDATE 2: My Take on What This Means for SL's Future -- UPDATE 3: Babbage & Other Beloved Lindens Let Go
Official SL blog: A Restructuring For Linden Lab

Of course I am saddened by seeing so many brilliant and creative people go, but I am sure they will soon find other interesting and rewarding things to do.

Concerning the implications for the future of Second Life, I see two important developments:

1) The move reflects a new strategy for Linden Lab, says the company, which aims to make its virtual world more browser-based, eliminating the need to download any software.

Most articles agree that the SL Web Viewer could be based on Unity3D, which, in my opinion, would be a great choice. See for example the Unity3D based Moondus technology developed by my good friend Bruno Cerboni and his team. Gwyn says "The average user wants to spend 3 minutes registering for an account and expects everything to be immediately obvious after they log in", and I couldn't agree more. Second Life needs to become much easier to use in order to attract and retain casual users, and a web client will at least simplify the installation (The Unity3D plugin auto-installs, and Unity3D support could become a native feature of at least Chrome). Of course running in a browser does not by itself make using SL easier. I think the 2.0 viewer is easier for casual visitors, and I see it as a first step toward the development of what I like to call onion interfaces: a first interface layer designed in such a way as to be extremely easy to use for newcomers, and more layers with more advanced options hidden inside.

2) Linden Lab... let the enterprise group go, which creates a customized version of the virtual world that sits behind a firewall.

To this Gwyn adds that Second Life is "focusing again on the consumer market", which seems a solid and pragmatic choice. I had hoped the launch of SL Enterprise a few months ago would start a renaissance of SL as a business collaboration and e-learning platform, but feared that it was too-little-too-late. Too little because other platforms like Teleplace address corporate and professional needs much better (see my article on Telepresence Education for a Smarter World for more thoughts on business and e-learning applications of VR worlds), and too late because, as Gwyn says (disagreeing), "the bad image of SL as a sex-only VW is too strong... The harm is done and LL cannot fix it."

So I think re-focusing SL on the consumer market, and acknowledging that it will never appeal to the more conservative "dinosaurs" in the corporate and educational sectors, make a lot of sense. But this does not mean the smarter professionals, corporations and universities should abandon SL: on the contrary, SL will remain a wonderful creative lab for the more dynamic and creative adventurers, and an enabling platform for many real advances to be made.

It is an important trend that the smarter organizations use IT platforms and tools developed for consumers, because they just work better than legacy corporate IT systems. Google Apps is the best example: the integrated business versions of Gmail, Google Docs, Voice, Video and other Google Apps in-the-cloud, initially developed for the consumer market, are so much better and cheaper than equivalent "professional" tools that I expect an exponential adoption, first by the leaner and smarter entities, and then by all the others. And once all collaborative Google Apps integrate multiuser videoconferencing, which seems to be appearing on the horizon, the Google cloud will probably become the standard online collaboration tool used by organizations of any nature and size.

In my post Simple videoconferencing in Second Life I have suggested some ways to integrate Google Apps as a standard videoconferencing and collaboration suite for closed groups in SL. Also, a Web based Unity3D viewer would permit integrating SL as VR part of an Intranet running on Google Apps. I definitely recommend these relatively simple developments which, I believe, would make the consumer-oriented SL much more suitable for professional collaboration as well.

Wednesday, June 2, 2010

Simple videoconferencing in Second Life

A simple videoconferencing experiment in Second Life, using Gmail video chat.

In my previous post on Telepresence Education for a Smarter World I wrote about the importance of videoconferencing and how it can add to immersion in VR worlds. So I made this simple experiment using Gmail video chat.

The result is surprisingly good, with very clear video and voice. Note that Google video conferencing may soon become multiuser (more than 2) and integrated with Google Apps (Gmail, Google Docs, calendars etc.), so this could be a very good solution for videoconferencing and collaboration in Second Life with some tweaks:

1) It should be possible to detach the video screenlet (second image), or display it full screen on the prim. In this first test, pressing the full screen button displays the video full screen on the PC, but it should be full screen on the prim. The video screenlet cannot be detached because of pop-up blockers. I made the video in the first image full screen by tweaking the texture parameters, but this is not an elegant solution.
2) It should be possible to access a password protected website and show it to other users without asking them to login. In this first test, other users see the Gmail login screen on the prim. If we can do something about this, Google Apps could become a VERY good solution for videoconferencing and collaboration in Second Life.
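As a hypothetical sketch of what Shared Media already allows today, the permission flags can lock the embedded page and restrict navigation to the prim's owner, so other viewers can watch but not click around. This does not solve the shared-login problem in point 2, but it shows the knobs available SL side; the URL and face number below are placeholders.

```lsl
// Hypothetical sketch: lock a Shared Media face to one page and
// let only the owner interact with or control it.
// The URL is a placeholder; face 4 is assumed to be the screen face.
string doc_url = "https://docs.google.com/YOUR_DOCUMENT";

default
{
    state_entry()
    {
        llSetPrimMediaParams(4, [
            PRIM_MEDIA_CURRENT_URL,    doc_url,
            PRIM_MEDIA_HOME_URL,       doc_url,
            PRIM_MEDIA_PERMS_INTERACT, PRIM_MEDIA_PERM_OWNER,
            PRIM_MEDIA_PERMS_CONTROL,  PRIM_MEDIA_PERM_OWNER
        ]);
    }
}
```

Because each viewer still loads the page with their own browser session, solving point 2 would need something on the Google Apps side (e.g. a published or publicly shared document) rather than anything scriptable in LSL.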

I am sure 1) and 2) can be solved with some tweaking and/or programming SL side and/or Google Apps side. Ideas?