Fyi. Sorry for crossposting.


Sent from my Samsung Galaxy smartphone.

-------- Original Message --------
From: Christoph Valentin <christoph.valentin@gmx.at>
Date: 09.03.19 01:33 (GMT+01:00)
To: Valentin Christoph <Christoph.Valentin@kapsch.net>
Subject: Fw: Re: [x3d-public] x3d-public Digest, Vol 120, Issue 34

F.y.i......
--
This message was sent from my Android mobile phone with GMX Mail.
On 08.03.19, 22:49, Christoph Valentin <christoph.valentin@gmx.at> wrote:
Hi Andreas,
 
Still some neurons firing, so I'm sending another update. Please feel free to break the "event cascade" when it starts to become boring :-)
 
Everything else inline.
 
Have a nice weekend
Christoph
 
P.S.: Today I deleted the "kamailio" clone from my GitHub account http://github.com/christoph-v, because that account is now intended solely for private use (SP-ARK, PS-ARK).
I created another account for professional purposes: http://github.com/christoph-v-kapsch
Sent: Friday, 08 March 2019 at 16:07
From: "Andreas Plesch" <andreasplesch@gmail.com>
To: "X3D Graphics public mailing list" <x3d-public@web3d.org>
Subject: Re: [x3d-public] x3d-public Digest, Vol 120, Issue 34
Thanks.
 
I looked at the network sensor and the BS Collaborate nodes. I think the idea is to explicitly forward all events which need sharing to the server, which then distributes them to the connected clients. This leaves the definition of the shared state to the scene, and therefore requires careful design and code for MU. Perhaps there is a way for the browser to better assist with making a scene MU-capable.
 
What is the complete state a client needs when it is admitted to a shared scene?
 
Since most fields of most nodes accept input, and can therefore potentially change, complete state probably means the values of all fields of all root nodes, which means of all nodes. The complete state may only need to be transferred when a client joins.
 
Plus the offsets of avatars from Viewpoints. And the clock. Perhaps other values.
 
[Christoph:] I think what you are considering here is the tradeoff between "MU is done completely by the browser" and "MU is done completely by the author".
Let's imagine the Web3D Consortium decided "MU is up to the author; everything should be done by EAI/SAI". Then every author would have to decide which nodes of his scene needed synchronization, and he would have free choice of the network protocol and of the server he uses. His effort would be maximal, but on the other hand his freedom would be maximal, too.
The opposite extreme would be that the browser vendor did all the MU inherently within the X3D nodes. The author's effort to create MU scenes from SU scenes would be zero, but his freedom would be zero, too (given that each browser implemented its own MU strategy).
So the Network Sensor is the ABSOLUTE MINIMUM that MUST be standardized (INCLUDING a standard for the network protocol), in order to keep independence from the browser vendor and from the server vendor.
 
I'm with you when you doubt the Network Sensor is enough. I'm with you when you insist some additional nodes MUST be standardized to keep the authors' effort small. The Network Sensor is not enough (I think I already said that here).
 
If you are interested, I can introduce you to the SMUOS/C3P idea, which I developed throughout my hobby project. It would be a waste of ten years if everybody had to repeat those experiences. And I guess there are some projects and products (people) around here that can help, too, much better than I can.
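
To make the "complete state" question above concrete, here is a minimal sketch (TypeScript; all names are illustrative, nothing here comes from the X3D specification) of what a joining client might need:

    // Hypothetical shape of the complete state a joining client receives:
    // every mutable field of every node, plus avatar offsets and the clock.
    type FieldValue = number | boolean | string | number[];

    interface SceneSnapshot {
      time: number;                        // shared simulation clock (seconds)
      fields: Record<string, FieldValue>;  // "nodeDEF.fieldName" -> current value
      avatars: Record<string, { position: number[]; orientation: number[] }>;
    }

    // Example: a trivial scene with one animated door, one lamp, one avatar.
    const snapshot: SceneSnapshot = {
      time: 1234.5,
      fields: { "Door.rotation": [0, 1, 0, 1.57], "Lamp.on": true },
      avatars: { "session42.user1": { position: [0, 1.6, 10], orientation: [0, 1, 0, 0] } },
    };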
 
I was thinking about avatar-initiated event cascades, since the state of everything else should be deterministic and depend only on time. These other things should update themselves. There are exceptions, such as scripts which generate random values.
 
Avatar-initiated event cascades generally start with environment sensors. Not sure if there are other ways. The idea would be to transmit just this event cascade to other clients, which can then replay it to update their scene instances.
[Christoph:] I think it will not be necessary to transmit event cascades; only single events and states need to be networked.
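
As a rough sketch of that distinction (the wire format is an assumption, not an existing protocol): each client publishes only the single initiating event, and the deterministic ROUTE graph reproduces the rest of the cascade locally in every instance.

    interface SharedEvent {
      node: string;       // DEF name of the node that changed
      field: string;      // field that received the event
      value: unknown;     // new field value
      timestamp: number;  // shared clock time of the event
    }

    // Publish just the single initiating event ...
    function onLocalUserEvent(ev: SharedEvent, socket: WebSocket): void {
      socket.send(JSON.stringify(ev));
    }

    // ... and on receipt, feed it into the local event graph, which then
    // replays the rest of the cascade through its own ROUTEs.
    function onRemoteEvent(raw: string, applyToScene: (ev: SharedEvent) => void): void {
      applyToScene(JSON.parse(raw) as SharedEvent);
    }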
 
I also was reading some of the GitHub open-dis documentation. Interesting background, and a pretty accurate discussion of coordinate systems. It may be possible to set up a very simple distributed simulation somewhere to join and test clients with.
[Christoph:] I do not want to comment on HLA/DIS here.
 
Single-browser MU: I think an additional touchedByUser field would be required, but perhaps there is a way to generalize the tracking of users.
[Christoph:] Probably. Every problem can be solved if you want to. Some solutions are beautiful, others are ugly.
 
 
Cheers,
 
-Andreas
 

Date: Thu, 7 Mar 2019 18:11:15 +0100
From: "Christoph Valentin" <christoph.valentin@gmx.at>
To: "x3d-public@web3d.org" <x3d-public@web3d.org>
Subject: Re: [x3d-public] Multiplayer strategies

Hi Andreas,

Please find another 2c inline :-)

All the best
Christoph

Sent: Thursday, 07 March 2019 at 14:02
From: "Andreas Plesch" <andreasplesch@gmail.com>
To: "X3D Graphics public mailing list" <x3d-public@web3d.org>
Subject: Re: [x3d-public] Multiplayer strategies

Hi Christoph,
I admit that it is somewhat uncomfortable to think about single-browser MU requirements, but exploring this could be a fertile exercise, as it seems orthogonal to sharing by multiple scene instances.
[Christoph:] Agree. Maybe I am just prejudiced, because I started my project from the MU example on Bitmanagement's homepage (is it still there?), where they used the "sessionId" (SFInt32) to identify the users and their avatars.
Now, when we think about single-browser MU requirements, the users should rather be identified by "sessionId + userId" (where userId would be a local identifier for a user of the local scene instance).
The next step would be to investigate the environment sensors. E.g., a touch sensor should not deliver "touchTime" (SFTime) but "touchedByUser" (SFInt32) instead, true?
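
As a sketch of such an identification scheme (purely hypothetical; neither UserId nor touchedByUser exist in the X3D specification):

    interface UserId {
      sessionId: number;  // identifies the scene instance (as in Bitmanagement's MU example)
      userId: number;     // identifies a local user within that instance
    }

    // A TouchSensor-like sensor would then report *who* touched, not only *when*:
    interface MuTouchOutput {
      touchTime: number;      // the classic SFTime output
      touchedByUser: UserId;  // hypothetical additional output for MU
    }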


What constitutes the state of a scene which needs synchronization across instances? The main changes which need updating are the progress of time, and avatar state.
[Christoph:] This depends on the use case. If you had a museum-like world (with predefined animation), where only the avatars moved interactively, then you would be right.
I prefer to dream of arbitrarily animated and/or simulated interactive worlds/universes with moving models (cars, trains, houses, doors, ...), where anything could be an avatar (even a locomotive could be an avatar).


Perhaps it suffices to synchronize avatar-generated event cascades and replay those in the instances? Would that require an event queue for recording?
[Christoph:] What do you mean by "avatar-generated event cascade"? I do not understand this term.
When you talk about recording, do you mean LI requirements? I think LI will be a topic, in particular with respect to avatar position, and in particular when it comes to mixed reality.

Another way to think about state synchronization is to have a very complete description of the state and then work only with deltas for synchronization and conflict resolution across instances.
[Christoph:] I think the complete state must be stored persistently on a server, because at any time a new scene instance can join the session and must receive the complete state for initialization (do you know how the Network Sensor of Bitmanagement works? I recommend checking it out).
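
A minimal sketch of that join protocol (message names and structure are assumptions for illustration): the server keeps the authoritative state, a joining instance first receives one full snapshot, and everything afterwards is deltas.

    type Delta = { key: string; value: unknown };

    class SessionServer {
      private state = new Map<string, unknown>();          // persistent shared state
      private clients = new Set<(msg: string) => void>();

      join(send: (msg: string) => void): void {
        // A new scene instance is initialized with the complete state first.
        send(JSON.stringify({ type: "snapshot", state: [...this.state] }));
        this.clients.add(send);
      }

      update(delta: Delta): void {
        this.state.set(delta.key, delta.value);            // keep the server copy current
        const msg = JSON.stringify({ type: "delta", delta });
        for (const send of this.clients) send(msg);        // fan out to all instances
      }
    }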

?
another 2c :)
-Andreas
Date: Thu, 28 Feb 2019 16:40:24 +0000
From: Valentin Christoph <Christoph.Valentin@kapsch.net>
To: Andreas Plesch <andreasplesch@gmail.com>
Cc: X3D Graphics public mailing list <x3d-public@web3d.org>
Subject: Re: [x3d-public] Multiplayer strategies
Message-ID: <VI1PR03MB47038C7653CE09362D433D3EE4750@VI1PR03MB4703.eurprd03.prod.outlook.com>

Hi Andreas,

Maybe another 2 cents from my side.

If we want to keep it simple, we should keep a 1:1 relationship between user and scene graph (I call this the "personal scene instance", PSI).

Why? Two reasons.

(1) Maybe the scene consists of many "modules", which might span a large section of the Virtual Universe, and which are loaded into and unloaded from each scene instance on demand.

One user is, for some time, only interested in module A, so other modules need not be loaded in "his" scene instance, saving memory and CPU resources.

Another user is interested in several modules at the same time --> he will need higher performance in his scene instance.

So the matter of scalability will be easier to handle if we keep user : PSI = 1 : 1.

(2) The scene might provide different "views" to different "users". One user might get a photorealistic 3D graphic; another user might receive a topographic illustration of the scene with only symbolic content. Only the "shared state" is the same for all scene instances of a multiuser session.

As I said, just my two cents.

All the best.

From: x3d-public <x3d-public-bounces@web3d.org> On Behalf Of Andreas Plesch
Sent: Thursday, February 28, 2019 4:25 PM
To: X3D Graphics public mailing list <x3d-public@web3d.org>
Subject: Re: [x3d-public] Multiplayer strategies

Thanks for all the thoughtful responses. Various ideas were offered. The DIS component is dedicated to communication and synchronization between browsers in a peer-to-peer fashion, but has its own limitations. Outside of X3D, various web technologies such as WebRTC, WebSocket or socket.io exist which can be used with ad hoc protocols and SAI- or DOM-based scene updating. I think Firebase is designed to push realtime updates of a JSON store to all connected clients, and could fit well. Synchronization of multiple avatars and persistent avatar registration on a dedicated service was suggested.
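
To illustrate one flavor of this (the message format and endpoint are invented; the DOM mechanism itself is how X3DOM scene updating works), remote updates could arrive over a websocket and be applied as plain attribute changes, which X3DOM picks up and re-renders:

    const ws = new WebSocket("wss://example.org/mu-session");  // placeholder endpoint

    // Apply a remote delta: X3DOM observes attribute mutations on scene elements.
    ws.onmessage = (msg: MessageEvent) => {
      const { id, attribute, value } = JSON.parse(msg.data);
      document.getElementById(id)?.setAttribute(attribute, value);
    };

    // Publish a local change, e.g. a dragged Transform's translation:
    function publish(id: string, attribute: string, value: string): void {
      ws.send(JSON.stringify({ id, attribute, value }));
    }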

It is a wide field. To narrow the domain, let's perhaps consider the local case: a single game console/browser with multiple controllers and split screen/multiple headsets, say up to 4 actors, no servers.
Using a projector or a large TV there is natural sharing, which is eliminated with HMDs. Replicating the screen, one mode is simply mirroring a master render to the other HMDs, but this is very unpleasant in VR. Another mode is one actor with sensing, and the others passive viewers, but still moving and looking. Another mode is full access to the scene for all locally connected users.

WebXR allows for multiple HMDs and controllers. I am not sure if web browsers can deal with multiple mice/keyboards, but I suspect they can; there is a Gamepad API.
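
The Gamepad API part is easy to check with a small sketch that enumerates connected pads; mapping pads to avatars would be the open design question:

    // List currently connected controllers; each could be assigned to an avatar.
    function pollControllers(): Gamepad[] {
      return Array.from(navigator.getGamepads()).filter((p): p is Gamepad => p !== null);
    }

    window.addEventListener("gamepadconnected", (e: GamepadEvent) => {
      console.log(`controller ${e.gamepad.index} available for an avatar`);
    });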

Brainstorming a multiple avatar, client only design:

- a list of render surfaces, for each avatar, perhaps layout, layer related
- a list of active avatars, linked to a render surface
- a way to add and remove an avatar
- an active viewpoint per avatar
- TouchSensor and other sensors linked to a list of avatars

Very fuzzy but perhaps a start for thinking about such a case.
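
As a type sketch of the list above, with every name provisional:

    interface RenderSurface {
      id: string;
      layer: number;                               // layering/layout related
      viewport: [number, number, number, number];  // x, y, width, height
    }

    interface Avatar {
      id: number;
      surface: RenderSurface;   // each avatar renders to its own surface
      activeViewpoint: string;  // DEF name of this avatar's bound Viewpoint
    }

    class LocalMuScene {
      private avatars = new Map<number, Avatar>();
      addAvatar(a: Avatar): void { this.avatars.set(a.id, a); }
      removeAvatar(id: number): void { this.avatars.delete(id); }
      // Sensors would consult this list, e.g. so a touch can report which avatar touched.
      activeAvatars(): Avatar[] { return [...this.avatars.values()]; }
    }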

I was looking around castle engine for inspiration from a gaming perspective but could not find much.

It may be that dealing with multiple avatars in a single browser is actually more complicated than one local browser per avatar plus a synchronized scene from a scene server, a design which has to deal with updates to the shared scene, the synchronization, and the distribution.

Was there a VRML approach to shared experiences?

From a practical standpoint, simply mirroring from a master HMD to a second connected HMD using WebVR in x3dom would be a first step to explore.
And perhaps exploring Firebase, e.g. whether it can store a JSON X3D scene, and how multiple X3DOM or X-ITE instances would receive it, perhaps in an Inline.

-Andreas



On Tue, Feb 26, 2019, 5:33 AM Andreas Plesch <andreasplesch@gmail.com> wrote:
With VR it may become more common to share a live, dynamic experience using multiple headsets and controllers. At first glance this seems to call for multiple, active viewpoints rendered by a single browser. The layering and layout components seem relevant.

Another strategy would be having multiple browsers with identical scenes and keeping scenes in sync with an additional process and SAI methods.

What are the strategies offered by X3D to support sharing a live, dynamic world ?

This came up as an x3dom GitHub issue and I thought it may be more generally interesting.

Andreas




------------------------------

Message: 2
Date: Thu, 7 Mar 2019 18:29:24 +0000
From: "Brutzman, Donald (Don) (CIV)" <brutzman@nps.edu>
To: "semantics@web3d.org" <semantics@web3d.org>, "X3D Graphics public
        mailing list" <x3d-public@web3d.org>
Subject: [x3d-public] X3D Semantic Web Working Group 7 MAR 2019:
        references, geometric properties, MPEG-7 Descriptors
Message-ID: <6aa55efc-497c-1813-4bdb-c5843ae2486c@nps.edu>

8.0. Eighth meeting of the Semantic Web Working Group

Attendees: Jakub Flotynski, Athanasios Malamos, Anita Havele, Don Brutzman.

        Web3D Teleconference Information
        http://www.web3d.org/member/teleconference-information

Prior minutes, Jakub and Athanasios:
        [x3d-public] X3D Semantic Web Working Group minutes, 17 JAN 2019: structural and conceptual semantics
        http://web3d.org/pipermail/x3d-public_web3d.org/2019-January/009898.html

All information in these minutes is approved for public release.

=================================================================

7.0 Last week's meeting included Thanos, Jakub, Nicholas, Anita and Don.

Essentially we reviewed website links and kept discussing/improving slides.

Unfortunately my minutes got mistakenly deleted.  Sorry about that.

=================================================================

8.1 *Working group information*

The X3D Semantic Web Working Group is a Web3D Consortium member-only group that does most of its business openly on the x3d-public mailing list.

        X3D Semantic Web Working Group Charter
        http://www.web3d.org/working-groups/x3d-semantic-web/charter

        X3D Semantic Web Working Group
        http://www.web3d.org/working-groups/x3d-semantic-web

"The X3D Semantic Web Working Group mission is to publish models to the Web using X3D in order to best gain Web interoperability and enable intelligent 3D applications, feature-based 3D model querying, and reasoning over 3D scenes."

        semantics@web3d.org
        http://web3d.org/mailman/listinfo/semantics_web3d.org

=================================================================

8.2 *Working group assets*

We have started work on the following website pages to record resources.  Many are now present, more will follow.

        X3D Semantic Web Public Assets
        http://www.web3d.org/x3d-semantic-web-public-assets

        X3D Semantic Web Member Assets
        http://www.web3d.org/member/wiki/x3d-semantic-web-member-assets

Working group co-chairs have permission to edit the member-assets page, then results are reviewed and copied over to public-assets page.

Inputs welcome to keep building and structuring these important lists of assets.

=================================================================

8.3 *Workshop opportunities*

We think that our current activity can likely be a contribution at the upcoming

        First Eurographics-EuroVR Workshop on Semantic 3D Content
        6 May 2019 in Genova Italy as part of EuroGraphics 2019
        http://semantic3d.org/workshop

Paper/poster/demonstration submission deadline extended: March 10, 2019

We discussed what a good follow-on might be for Web3D 2019 Conference.  Perhaps another workshop, or simply a meeting, on X3D Semantic Web Working Group.  This could build on the EuroGraphics 2019 momentum, disseminate progress among participants and set us up for much expected work to emerge in the coming year.

        WEB3D 2019: 24th International ACM Conference on 3D Web Technology
        26-28 July 2019, Colocated with SIGGRAPH2019, Los Angeles California USA
        http://www.web3d.org/event/web3d-conference-2019
        http://web3d2019.web3d.org

The current work will not be sufficiently mature to be a paper submission by the deadline (~10 days away).

Instead we plan to submit a Web3D workshop proposal for this new work. This can build upon the Eurographics-EuroVR Workshop products.

Possibly a poster is also appropriate, especially if it describes coherent existing work.

Web3D workshop submission deadline: 1 April 2019. This will be next week's topic.

=================================================================

8.4 Primary topic: *slideset Semantic X3D - thoughts and ideas*

Much detailed architectural work is in progress, distilled in these slides.

https://docs.google.com/presentation/d/1fCMu0V-zRAfJqFId7QIMyLh2EOCr5Qgl63MqqyFk6JM/edit?ts=5c5c6903

_Geometric properties_ (slides 9-10) updated:

Thanos said "sitting in my corner" ... Can we define a property for "corner" based on geometric relationships?  Good to think about.

We might suppose multiple candidate geometric properties:
* Primitive shapes: Rectangular, Conical, Cylindrical, Spherical, Ellipsoid, Point, Line, Mesh
* Side: TopSide, BottomSide, LeftSide, RightSide, FrontSide, BackSide
* ParametricSurface, NURBS, BREP, other types?
* Characteristics: Irregular, Open, Closed (Watertight), Corner, Seam, Wall
* Angular relationships: Perpendicular, Acute, Obtuse

------
Thanos: corner rdfs:subClassOf
     str:includes(2) str:triangles;
If |normal(A) CROSS-PRODUCT normal(B)| > 0 -> createCorner(C) and includes(C,A) and includes(C,B)
------
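
One numeric reading of this rule, as a sketch (assuming the intent is "normals not parallel"; a dihedral-angle threshold would be the practical refinement):

    type Vec3 = [number, number, number];

    function cross(a: Vec3, b: Vec3): Vec3 {
      return [a[1] * b[2] - a[2] * b[1],
              a[2] * b[0] - a[0] * b[2],
              a[0] * b[1] - a[1] * b[0]];
    }

    // Two adjacent triangles form a corner when |normal(A) x normal(B)| > 0,
    // i.e. their normals are not parallel (eps guards against numerical noise).
    function isCorner(normalA: Vec3, normalB: Vec3, eps = 1e-6): boolean {
      const c = cross(normalA, normalB);
      return Math.hypot(c[0], c[1], c[2]) > eps;
    }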

TODO: does there already exist a set of 3D property classes related to shape of models?
TODO: should we next compare existing 3D ontologies of interest?
TODO: build examples that help us determine the best, most reusable elements of an X3D ontology?

We discussed relationships and 3D functions for extraction of semantic information from geometric shapes, so we could specify possible goals of our work:
* Extraction of semantic information from X3D models (revisit MPEG-7)
* Generating X3D models on the basis of semantic 3D models (conceptual)
* Likely an iterative process, OWL inference can generate more RDF properties
* Semantic annotation (description) of X3D (without representation)

=================================================================

8.5 MPEG-7 overview

We briefly discussed prior work with MPEG-7.  Background:

        MPEG-7 - Wikipedia
        https://en.wikipedia.org/wiki/MPEG-7

"MPEG-7 is a multimedia content description standard. It was standardized in ISO/IEC 15938 (Multimedia content description interface). This description will be associated with the content itself, to allow fast and efficient searching for material that is of interest to the user. MPEG-7 is formally called Multimedia Content Description Interface. Thus, it is not a standard which deals with the actual encoding of moving pictures and audio, like MPEG-1, MPEG-2 and MPEG-4. It uses XML to store metadata, and can be attached to timecode in order to tag particular events, or synchronise lyrics to a song, for example.

It was designed to standardize:
* a set of Description Schemes ("DS") and Descriptors ("D")
* a language to specify these schemes, called the Description Definition Language ("DDL")
* a scheme for coding the description

The combination of MPEG-4 and MPEG-7 has been sometimes referred to as MPEG-47."

TODO Thanos will look up whether licensing or patents have been declared.

        MPEG Licensing Authority
        https://www.mpegla.com

=================================================================

8.6 MPEG-7 Visual Descriptors: More than MPEG-7

These are draft slides that Thanos has prepared for our discussions.  They will be exposed publicly once further developed.

https://docs.google.com/presentation/d/11VSFHriBnOXJzsHfYX0XDUMVqO5X9HePUvQB_1mxRRY/edit#slide=id.p

Back to shared slideset, follow-on slide 11:

--------------------------------------
Inclusion of Visual Descriptors in X3D

Visual Descriptors are available for color and shape.  They seem quite analogous to structure provided by RDF properties.

It is an interesting question whether the X3D Specifications are "ready" for inclusion of visual descriptors.  Visual-descriptor properties are primarily metadata about a scene, not directions for rendering.  Indeed our current effort is to create such a conceptually coherent ontology for X3D.

Thus if we define how to include visual descriptor properties in a scene,
* Authors could include Metadata nodes with RDF properties (a sketch follows below),
* Tools could perform geometric inference and similarly add Metadata nodes
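
A sketch of the first bullet (the property name and value are invented examples; MetadataSet and MetadataString are standard X3D metadata nodes, shown here through X3DOM's DOM encoding):

    // Attach an RDF-style property to a scene node as X3D metadata.
    function annotate(shapeId: string, name: string, value: string): void {
      const shape = document.getElementById(shapeId);
      if (!shape) return;
      const set = document.createElement("MetadataSet");
      set.setAttribute("containerField", "metadata");  // attach as the node's metadata
      const str = document.createElement("MetadataString");
      str.setAttribute("name", name);                  // e.g. an ontology property name
      str.setAttribute("value", `"${value}"`);         // MFString encoding
      set.appendChild(str);
      shape.appendChild(set);
    }

    annotate("chairShape", "geom:Characteristic", "Corner");  // hypothetical usage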

Attaching semantic information to X3D scenes

This working group needs to identify X3D Ontology mappings as
* embedded MetadataSet structures
* embedded (multi-namespace?) and external RDF files
* norms and best practices for including such descriptor files

======================================

8.7 *W3C Ontology for Media Resources*

Really important reference that Jakub identified:

Ontology for Media Resources 1.0
W3C Recommendation 09 February 2012
https://www.w3.org/TR/mediaont-10/

This ontology has an amazing number of metadata correlations.  Further, aligning an X3D Ontology with this approach would immediately give us a broad set of terms and mappings, all compatible with HTML5.  Excerpt from Table of Contents:

         5.2.2 Multimedia metadata formats mapping tables
             5.2.2.1 CableLabs 1.1
             5.2.2.2 DIG35
             5.2.2.3 Dublin Core
             5.2.2.4 EBUCore
             5.2.2.5 EXIF 2.2
             5.2.2.6 ID3
             5.2.2.7 IPTC
             5.2.2.8 LOM 2.1
             5.2.2.9 Media RSS
             5.2.2.10 MPEG-7
             5.2.2.11 OGG
             5.2.2.12 QuickTime
             5.2.2.13 DMS-1
             5.2.2.14 TTML
             5.2.2.15 TV-Anytime
             5.2.2.16 TXFeed
             5.2.2.17 XMP
             5.2.2.18 YouTube
         5.2.3 Multimedia container formats mapping tables
             5.2.3.1 3GP
             5.2.3.2 Flash
                 5.2.3.2.1 FLV
                 5.2.3.2.2 F4V
             5.2.3.3 QuickTime
             5.2.3.4 MP4
             5.2.3.5 OGG
             5.2.3.6 WebM

We definitely need to review and study this work further.

Wondering if we can invite an expert in that group to give a presentation at Web3D 2019?!

=================================================================

8.8 * Planning Ahead *

We plan to meet 14 MAR and 21 MAR.  No meeting 28 MAR.

Of interest:

        Protégé Short Course: MARCH 27 - 29, 2019 at STANFORD, CA
        https://protege.stanford.edu/short-courses.php

=================================================================

Steady interesting progress, thanks colleagues!  8)  8)  8)

all the best, Don
--
Don Brutzman  Naval Postgraduate School, Code USW/Br       brutzman@nps.edu
Watkins 270,  MOVES Institute, Monterey CA 93943-5000 USA   +1.831.656.2149
X3D graphics, virtual worlds, navy robotics http://faculty.nps.edu/brutzman
