
Megacity Unity Demo Project Learning

Published: 2024-08-02

1. Preface

The Megacity demo was released in Spring 2019; this blog post was written in 2024, by which time ECS had long since had its official release and reached version 1.2.3. The good news is that the core has changed very little, and most of the interfaces have only changed names.

This demo is somewhat smaller in scope than the earlier Book of the Dead (released in 2018) and mainly demonstrates DOTS-related content. I had some free time recently, and the project files had been sitting on my hard disk since the 2019.2 era, so I finally got around to it.

 

The demo has been uploaded to Baidu Netdisk:

Link:/s/1X1gh6hQSRuB0KenlRZsOiw
Extract code: iios

 

To open the project, please use Unity 2019.1.0b7. Part of the Unity Packages will be downloaded from Unity's servers, and since the version is very old there is no guarantee they will resolve correctly; this can be fixed manually.

 

2. ECS Section

Let's start with a few features that use Hybrid ECS.

2.1 HLOD

After opening the main scene, in any Section SubScene, you can see that some models have HLOD components applied to them.

HLOD means that when objects in the scene reach their last LOD level, those objects' last-level LODs are merged into a single mesh for display: for example, three or four houses and utility poles in the distance. The merged batch replaces the individual models with the merged mesh, and the merge operation can be done offline, generated well in advance.

The disadvantage of HLOD is that the extra HLOD meshes take up more memory, and it can even be a net pessimization, depending on the project.

In the MegaCity demo, offline creation of HLOD models can be done via script.

The HLOD script is part of the functionality encapsulated in Hybrid ECS; the HLOD display replacement and related logic is computed through ECS. When using it, the number of LOD levels on the LOD Group component needs to match the LodParentTransforms in the HLOD. For example, there are 2 Low LOD GameObjects in the image below, which are actually 2 levels of HLOD:

(In theory HLOD replaces the group with a single mesh, but in practice Unity supports multiple HLOD levels.)
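To make the idea concrete, here is a small conceptual sketch (plain Python, not Unity's API; all names are invented for illustration): each object picks its own LOD by distance, and once every object in a group would fall to its coarsest LOD, the whole group is replaced by one pre-merged HLOD mesh.

```python
# Conceptual sketch of HLOD selection. Each object has ascending distance
# thresholds; past the last threshold it is at its coarsest LOD. If ALL
# objects in the group are at their coarsest LOD, draw the merged mesh.

def select_lod(distance, thresholds):
    """Return the LOD index for one object given ascending distance thresholds."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold -> coarsest LOD

def hlod_render_list(objects, camera_distance):
    """objects: list of (name, thresholds). Returns the draw list."""
    lods = [(name, select_lod(camera_distance, th)) for name, th in objects]
    # Every object at its coarsest LOD -> swap in the single merged HLOD mesh.
    if all(lod == len(th) for (name, lod), (_, th) in zip(lods, objects)):
        return ["merged_hlod_mesh"]
    return [f"{name}_lod{lod}" for name, lod in lods]
```

This mirrors the trade-off mentioned above: the merged mesh must exist in memory alongside the per-object LODs, which is the cost HLOD pays for fewer draw calls at a distance.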

2.2 SubScene

SubScene is Unity's sub-scene nesting feature built on DOTS; at its core, I believe, it is Unity's exposed streaming scene-loading interface:

m_Streams[i].Operation = new AsyncLoadSceneOperation(entitiesBinaryPath, /* … */, /* … */, resourcesPath, entityManager);
m_Streams[i].SceneEntity = entity;

SubScene also comes with the ability to convert the scene's content into a binary format suitable for streaming.

3. Some common ECS concepts

Before diving into MegaCity, I feel I should write up some preliminary ECS concepts.

3.1 Filtering mechanism

In traditional code, a Manager class is routinely written to manage objects of a class through a Register/Unregister mechanism.

In ECS such logic becomes a filtering mechanism. Take MegaCity's BoxTriggerSystem as an example: it is a collision management system that similarly handles the triggering of OnTriggerEnter-style events.

Collision boxes are registered through a Hybrid ECS MonoBehaviour conversion component:

The filtering code in the ECS System is as follows:

m_BBGroup = GetComponentGroup(
    new EntityArchetypeQuery
    {
        All = new ComponentType[] { typeof(BoundingBox) },
        None = new ComponentType[] { typeof(TriggerCondition) },
        Any = Array.Empty<ComponentType>(),
    });

Entities whose ComponentData contains BoundingBox are filtered into the corresponding System for processing.

Conversely, the traditional Manager's Unregister operation becomes removing this ComponentData in ECS, so that the entity is no longer matched by the filter on the next frame.
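The All/None/Any query described above can be sketched in a few lines of plain Python (illustrative only, not Unity's API; entities here are just dicts of component names):

```python
# Minimal sketch of an ECS archetype query: an entity matches when it has
# all of `all_of`, none of `none_of`, and (if given) at least one of `any_of`.

def match(entity, all_of=(), none_of=(), any_of=()):
    comps = set(entity["components"])
    if not set(all_of) <= comps:
        return False          # missing a required component
    if set(none_of) & comps:
        return False          # carries an excluded component
    if any_of and not (set(any_of) & comps):
        return False          # none of the optional alternatives present
    return True

def query(entities, **kw):
    """The 'system' only ever sees entities that pass the filter."""
    return [e for e in entities if match(e, **kw)]
```

Removing a component from an entity (the ECS analogue of Unregister) simply makes it stop matching on the next query, with no explicit deregistration call.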

3.2 CommandBuffer Handling in Jobs

Still using the MegaCity demo's BoxTriggerSystem as an example: a struct Job is used to handle the various tasks across multiple threads, and the generated code can be accelerated by Burst.

Inside a Job we cannot perform structural changes such as removing ComponentData, so such add/remove operations are queued through a CommandBuffer and processed after the Job finishes.

This is a bit like the CommandBuffer in the rendering pipeline:

public struct TriggerJob : IJobChunk
{
    // type restored from context; the original post lost the field's type name
    public EntityCommandBuffer.Concurrent m_EntityCommandBuffer;
    //...
    public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
    {
        //...
        // add trigger component
        m_EntityCommandBuffer.AddComponent(chunkIndex, newBoundingBox, new TriggerCondition());
    }
}
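The deferred-command idea itself is independent of Unity and can be sketched in a few lines (names are illustrative, not Unity's): commands recorded during iteration are replayed afterwards, so the job never mutates the data it is iterating over.

```python
# Sketch of a command buffer for structural changes: record during the "job",
# apply ("playback") only after iteration has finished.

class CommandBuffer:
    def __init__(self):
        self._commands = []

    def add_component(self, entity, name, value=None):
        self._commands.append(("add", entity, name, value))

    def remove_component(self, entity, name):
        self._commands.append(("remove", entity, name, None))

    def playback(self):
        """Apply all recorded structural changes, then clear the buffer."""
        for op, entity, name, value in self._commands:
            if op == "add":
                entity["components"][name] = value
            else:
                entity["components"].pop(name, None)
        self._commands.clear()
```

The same pattern is why a rendering CommandBuffer feels familiar here: both turn "do it now" calls into a replayable list applied at a safe point.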

3.3 Marking Logic Processing

So how does a collision manager like BoxTriggerSystem mark objects that have collided?

In fact, this is also handled by filtering: after a collision occurs, a ComponentData, TriggerCondition, is added to the corresponding entity:

m_EntityCommandBuffer.AddComponent(chunkIndex, newBoundingBox, new TriggerCondition());

Just skip entities containing TriggerCondition when filtering:

m_BBGroup = GetComponentGroup(
    new EntityArchetypeQuery
    {
        All = new ComponentType[] { typeof(BoundingBox) },
        None = new ComponentType[] { typeof(TriggerCondition) },
        Any = Array.Empty<ComponentType>(),
    });

Then another System, responsible for music, in turn fetches the entities tagged with both TriggerCondition and MusicTrigger:

m_TriggerData = GetComponentGroup(
    new EntityArchetypeQuery
    {
        All = new ComponentType[] { typeof(TriggerCondition), typeof(MusicTrigger) },
        None = Array.Empty<ComponentType>(),
        Any = Array.Empty<ComponentType>()
    });

So the idea in ECS is to replace traditional OnEnable/OnDisable message events with marker components.
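Putting the two systems together, the whole marker flow can be sketched as follows (plain Python, invented names; the real systems use the archetype queries shown above): one system stamps a TriggerCondition component instead of firing an event, and other systems discover it by filtering.

```python
# The "event" is just data: trigger_system tags entities, music_system
# reacts to the tag by filtering. No callbacks, no registration.

def trigger_system(entities, player_pos):
    """Tag untagged bounding boxes that the player is inside (1-D boxes)."""
    fired = []
    for e in entities:
        c = e["components"]
        if "BoundingBox" in c and "TriggerCondition" not in c:
            lo, hi = c["BoundingBox"]
            if lo <= player_pos <= hi:
                c["TriggerCondition"] = True   # stamp the marker
                fired.append(e)
    return fired

def music_system(entities):
    """React to entities carrying both TriggerCondition and MusicTrigger."""
    return [e["components"]["MusicTrigger"]
            for e in entities
            if "TriggerCondition" in e["components"]
            and "MusicTrigger" in e["components"]]
```

Note that the second call to trigger_system fires nothing, because the None-filter (here the `"TriggerCondition" not in c` check) skips already-tagged entities, which is exactly the behavior described above.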

4. The Demo Itself

4.1 Scene structure

Let's first look at the structure of the content statically placed in the MegaCity scene.

  1. Audio holds the audio configuration. MegaCity uses DSPGraph, an ECS audio module Unity had opened up, but at the time (when the MegaCity demo was released) the implementation was rather rudimentary, presumably just enough to satisfy the demo's basic usage requirements.
  2. Pathing holds information about the ships' paths and is one of the points the demo wants to show.
  3. Player holds the ship-related logic; this part does not use DOTS.

4.2 LightPoolSystem

LightPoolSystem mainly traverses, in ECS form, the lights around the current ship and within the camera frustum, performing logical filtering and object-pool reuse.

Because of the HDRP rendering pipeline, lights in the scene interact with the volumetric fog effect for a better-looking result.

 

A LightRef script is used to convert the lights in the scene into ECS:

 

Let's go to LightPoolSystem's OnUpdate for a quick walkthrough of the logic:

protected override JobHandle OnUpdate(JobHandle handle)
{
    // early-out guard; the checked references were lost in the original post:
    // if (... == null || !...) return handle;

    #region Setup new lights
    #endregion

    #region Find closest lights
    #endregion

    #region Assign instances
    #endregion

    #region Update light intensity
    #endregion

    return handle;
}

1) The first step, Setup new lights, fetches the SharedLight component data not yet tagged with LightPoolCreatedTag; the filter is structured as follows:

m_NewSharedLights = GetComponentGroup
(
    // method names restored as a best guess; the original post lost them
    ComponentType.ReadOnly<SharedLight>(),
    ComponentType.Subtractive<LightPoolCreatedTag>()
);

SharedLight is the Hybrid ECS conversion object in the scene; the corresponding MonoBehaviour conversion script is LightRef.

Assuming 50 lights are currently loaded in the scene, this step creates 50 entities, but the corresponding object pools are created lazily, keyed by whichever light template is used.

At the end, this step tags the entities with LightPoolCreatedTag so that this logic is not entered again on the next Update.

 

2) The second step, Find closest lights, filters the converted scene lights by frustum and distance into another NativeArray, ClosestLights.

 

3) The third step, Assign instances, assigns concrete light instances to the filtered entities and stores them in another NativeArray, AssignedLights, for subsequent operations.

 

4) The fourth step, Update light intensity, operates directly on AssignedLights to update light intensities. Lights whose Active flag is false keep dimming until their brightness reaches 0, and are then recycled.
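The pooling and fade-out behavior from steps 1 and 4 can be sketched like this (illustrative Python, not the demo's code; names are invented):

```python
# Lazy object pool plus dim-then-recycle: instances are only created when
# actually needed, and inactive lights fade to 0 before returning to the pool.

class LightPool:
    def __init__(self):
        self.free = []      # recycled instances
        self.created = 0

    def acquire(self):
        if self.free:
            return self.free.pop()
        self.created += 1   # lazy creation: only when the pool is empty
        return {"intensity": 0.0, "active": True}

    def release(self, light):
        self.free.append(light)

def update_intensity(pool, assigned, fade=0.5):
    """Dim inactive lights each update; recycle them once fully dark."""
    still_assigned = []
    for light in assigned:
        if light["active"]:
            light["intensity"] = 1.0
            still_assigned.append(light)
        else:
            light["intensity"] = max(0.0, light["intensity"] - fade)
            if light["intensity"] == 0.0:
                pool.release(light)     # fully dark: back to the pool
            else:
                still_assigned.append(light)
    return still_assigned
```

The fade-before-recycle step is what avoids visible popping when a light leaves the frustum.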

 

4.3 StreamingLogic

This is the encapsulation logic for streaming scene loading: Unity's SubScene does not fully encapsulate the corresponding load/unload handling; only the interface is provided, so we need to write an additional layer of logic.

The player object has the configuration script StreamingLogicConfigComponent attached, providing the parameters for streaming loading:

A small amount of logic is then processed in the System, and finally Unity's ECS streaming load system is notified by attaching ComponentData:

struct BuildCommandBufferJob : IJob
{
    public EntityCommandBuffer CommandBuffer;
    public NativeArray<Entity> AddRequestArray;
    public NativeArray<Entity> RemoveRequestArray;

    public void Execute()
    {
        foreach (var entity in AddRequestArray)
        {
            // call targets restored from the fields above
            CommandBuffer.AddComponent(entity, default(RequestSceneLoaded));
        }
        foreach (var entity in RemoveRequestArray)
        {
            CommandBuffer.RemoveComponent<RequestSceneLoaded>(entity);
        }
    }
}
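The layer of logic that decides which sub-scenes go into AddRequestArray and RemoveRequestArray is essentially a distance test against the player. A hedged sketch (parameter names invented, not the demo's; the demo's actual config lives in StreamingLogicConfigComponent):

```python
# Distance-based streaming decision with hysteresis: scenes inside the load
# radius are requested, scenes beyond the (larger) unload radius are released.
# Using two radii prevents load/unload churn at the boundary.

def streaming_requests(player, subscenes, load_radius, unload_radius, loaded):
    """subscenes: {scene_id: center_position} (1-D for simplicity).
    Returns (to_load, to_unload) lists of sub-scene ids."""
    assert unload_radius >= load_radius
    to_load, to_unload = [], []
    for scene_id, center in subscenes.items():
        dist = abs(center - player)
        if dist <= load_radius and scene_id not in loaded:
            to_load.append(scene_id)
        elif dist > unload_radius and scene_id in loaded:
            to_unload.append(scene_id)
    return to_load, to_unload
```

In the demo the outputs of this kind of decision end up as RequestSceneLoaded add/remove commands, as in the job above.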

4.4 Megacity Audio System

Perhaps this system was meant to be the highlight, but it turns out to be mostly a wrapper over Unity functionality.

 

First, the system's code can be found in the Package Manager. You may also notice that the AudioMixer is empty, which is another peculiarity of the MegaCity demo: all of its audio was developed on top of this system.

 

Add ENABLE_DSPGRAPH_INTERCEPTOR to the project's scripting define symbols to turn on the debugger:

 

Once enabled, you can open the debugger window at Window/DSP Graph and see the graph structure of all the audio and how it is finally mixed down for output:

 

The MegaCity demo's fly-by sounds as ships whoosh past (FlyBySystem), and the various kinds of traffic audio, all call into this system.

 

An ECSoundEmitterComponent can be attached where needed, similar to an AudioSource:

 

In-game audio is first registered with the SamplePlaybackSystem, as if the audio were placed into the graph up front, temporarily muted and then turned on when needed:

// call restored as a best guess; the original post lost the method names
var playbackSystem = World.GetOrCreateManager<SamplePlaybackSystem>();
playbackSystem.AddClip(clip);

 

Actual playback is handled separately elsewhere; you can see the cached AudioClip being looked up via GetInstanceID:

// identifiers partially restored as a best guess; the original post lost them
var sample = EntityManager.CreateEntity();
AddClip(clip);
EntityManager.AddComponentData(sample, new AdditiveState());
EntityManager.AddComponentData(sample, new SamplePlayback { Volume = 1, Loop = 1, Pitch = 1 });
EntityManager.AddComponentData(sample, new SharedAudioClip { ClipInstanceID = clip.GetInstanceID() });
m_SampleEntities.Add(sample);

 

Finally, looking at the sound playback implementation, there seems to be no dedicated interface; clips are played and removed on a timer, in the same way AudioClips are mounted above.

The design philosophy is not like Wwise/FMOD either: there is no event logic, just a performance-oriented system design.

4.4.2 ChunkEntityEnumerable

The utility class ChunkEntityEnumerable simplifies the page-turning involved in traversing chunks inside a Job:

public bool MoveNext()
{
    if (++elementIndex >= currChunkLength)
    {
        if (++chunkIndex >= chunks.Length)
        {
            return false;
        }

        elementIndex = 0;
        currChunk = chunks[chunkIndex].GetNativeArray(entityType);
        currChunkLength = currChunk.Length;
    }
    return true;
}
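The same page-turning idea in plain Python (illustrative, not the demo's code): walk element by element, advancing to the next chunk when the current one is exhausted, so callers see one flat sequence.

```python
# Flatten a list of chunks into one element stream, mirroring MoveNext above:
# the outer loop "turns the page", the inner loop walks the current chunk.

class ChunkEnumerable:
    def __init__(self, chunks):
        self.chunks = chunks

    def __iter__(self):
        for chunk in self.chunks:
            for element in chunk:
                yield element
```

Empty chunks are skipped naturally, which is the edge case the `currChunkLength` bookkeeping handles in the C# version.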

4.5 Traffic Logic Processing

This is probably one of the most worth-learning features of MegaCity. The demo uses a Cinemachine Path for the player's route, while the NPC ships use a hand-written road path system.

4.5.1 Road handling

The road tooling is hand-written:

If you need to edit the Path, check Show All Handles; Show Coloured Roads is the switch for viewing the road network.

Is On Ramp is used to mark a ramp onto the main road, and Percentage Chance For On Ramp sets the probability of entering the ramp from the branch road.
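A tiny sketch of how such a percentage setting can drive branch selection at a fork (names invented; the demo's actual selection code is not shown in this post):

```python
import random

# Weighted branch choice: take the ramp with the configured probability,
# otherwise continue on the current road.

def choose_next_road(straight, ramp, ramp_chance_percent, rng=random):
    if rng.random() * 100.0 < ramp_chance_percent:
        return ramp
    return straight
```

Passing an explicit seeded `random.Random` makes the choice reproducible, which is handy when debugging traffic flow.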

Ticking Show Coloured Roads:

 

4.5.2 NPC Airship Pathfinding Processing

The NPC ships get road information via the Path and do path calculation via Catmull-Rom interpolation. The clever part is that the derivative of the Catmull-Rom curve gives the curve's rate of change, which is used as a coefficient to achieve smooth transitions for the ship:

// field and method names partially restored as a best guess; the original post lost several identifiers
public void Execute(ref VehiclePathing p, ref VehicleTargetPosition pos, [ReadOnly] ref VehiclePhysicsState physicsState)
{
    var rs = RoadSections[p.RoadIndex];

    float3 c0 = CatmullRom.GetPosition(rs.p0, rs.p1, rs.p2, rs.p3, p.curvePos);
    float3 c1 = CatmullRom.GetTangent(rs.p0, rs.p1, rs.p2, rs.p3, p.curvePos);
    float3 c2 = CatmullRom.GetConcavity(rs.p0, rs.p1, rs.p2, rs.p3, p.curvePos);

    float curveSpeed = length(c1);

    pos.IdealPosition = c0;
    pos.IdealSpeed = p.speed;

    if (lengthsq(physicsState.Position - c0) < kMaxTetherSquared)
    {
        p.curvePos += p.speed / rs.arcLength / curveSpeed * DeltaTimeSeconds;
    }
}

But why it also has to divide by arcLength is less clear.
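The division by the tangent length at least has a clean interpretation, which can be checked numerically (this is my own illustration, independent of the demo's exact fields): for a Catmull-Rom segment P(t), |P'(t)| is world units per unit of t, so stepping t by speed / |P'(t)| * dt moves the ship at roughly constant world speed even where the parameterization is "fast". The extra arcLength division then reads as treating `speed` as normalized per section, though that remains my guess.

```python
# 1-D Catmull-Rom segment between p1 and p2, plus its derivative, and an
# integration step that divides by |P'(t)| to keep world speed constant.

def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

def catmull_rom_tangent(p0, p1, p2, p3, t):
    """dP/dt of the same segment."""
    return 0.5 * ((-p0 + p2)
                  + 2 * (2 * p0 - 5 * p1 + 4 * p2 - p3) * t
                  + 3 * (-p0 + 3 * p1 - 3 * p2 + p3) * t * t)

def advance(t, speed, dt, pts):
    """One step of t, normalized by the local curve speed |P'(t)|."""
    curve_speed = abs(catmull_rom_tangent(*pts, t))
    return t + speed / curve_speed * dt
```

Integrating this way, the world-space distance covered per step stays close to speed * dt regardless of how unevenly t maps onto the curve.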

 

5. Miscellaneous

5.1 ComponentDataFromEntity<T>: fast component lookup by Entity

Take the code in the Demo as an example:

foreach (var newFlyby in New) // "New" holds the newly added fly-by entities
{
    var positional = PositionalFromEntity[newFlyby];
    //...
}

Components can be obtained directly through this class; in the new version of ECS it has been renamed:

ComponentLookup<T>

 

5.2 DelayLineDopplerHack

This script is placed outside the Scripts folder and is not actually wired into the project. It takes a rather hacky approach, processing the audio buffers directly in an attempt to implement the Haas effect:

var haasDelay = (int)((s.m_Attenuation[0] - s.m_Attenuation[1]) * m_Haas * (c * 2 - 1));
var delaySamples = math.clamp(delaySamplesBase + haasDelay, 0, maxLength);  // clamp call restored

 

Haas showed experimentally that if the time difference Δt between two sound waves from the same source reaching the listener is within 5~35 ms, a person cannot distinguish the two sources: the perceived direction comes only from the leading sound, and the lagging sound seems not to exist. If the delay Δt is 35~50 ms, the ear begins to perceive the lagging source, but the perceived direction is still that of the leading sound. If Δt > 50 ms, the ear can distinguish the directions of both the leading and lagging sources, i.e. a clear echo is usually heard. This description of how different delays between two sources affect human hearing is called the Haas effect, and it helps in establishing a stereo listening environment.
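The snippet above can be paraphrased as follows (a hedged sketch: variable names follow the C# fragment, but the surrounding structure is invented). The attenuation difference between the two channels is scaled into a per-channel sample delay, with `channel * 2 - 1` giving opposite signs for left and right, then clamped to the delay line's length:

```python
# Haas-style inter-channel delay: turn a level difference into a small,
# oppositely-signed sample delay per channel, clamped to the delay line.

def haas_delay_samples(att_left, att_right, haas_scale, channel,
                       base_delay, max_length):
    """channel: 0 = left, 1 = right; (channel * 2 - 1) yields -1 or +1."""
    sign = channel * 2 - 1
    haas_delay = int((att_left - att_right) * haas_scale * sign)
    # clamp into the valid range of the delay line
    return min(max(base_delay + haas_delay, 0), max_length)
```

For reference, the 5~35 ms window above corresponds to roughly 240~1680 samples at 48 kHz, which bounds how large a useful `haas_delay` can be.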

 

 


 

Unity2022 New MegacityDemo Download:/de/demos/megacity-competitive-action-sample

Unity multiplayer online version of Megacity:/cn/demos/megacity-competitive-action-sample

Unity 2019 old version Megacity download:/t/megacity-feedback-discussion/736246/81?page=5

Book of the Dead demo project review and study:/hont/p/