
C# implementation of microphone and camera push streaming on domestic Linux (source code; Galaxy Kirin, UnionTech UOS)

Popularity: 686 / 2024-10-23 14:58:06

With the changes in the international political and economic landscape, especially the increasingly fierce technological competition between China and the United States, software localization ("Xinchuang") has become a pressing matter. In this environment, we are gradually migrating our existing Windows software to the Xinchuang infrastructure, adapting it to domestic operating systems (such as Galaxy Kirin and UnionTech UOS), domestic chips (such as Phytium, Kunpeng, Hygon, Loongson, and Kirin), and domestic databases.

We often run into requirements like this: on Galaxy Kirin or UnionTech UOS, push camera video and microphone audio over RTMP to a streaming media server (e.g. nginx or SRS). How can this be done?

I. Technical approach

To accomplish this, specifically, the following technical issues need to be addressed:

(1) Microphone data acquisition.

(2) Camera data acquisition.

(3) AAC encoding of audio data.

(4) H264 encoding of video data.

(5) Push the encoded data to the streaming server according to RTMP protocol.

(6) Synchronization of audio-video is ensured by timestamping (PTS).
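To make point (6) concrete, here is a minimal sketch (not the library's actual API; all names are illustrative) of how PTS values can be derived from simple counters so that audio and video share one millisecond timeline:

```csharp
using System;

// Sketch: deriving PTS values for A/V sync from frame and sample counters.
class PtsDemo
{
    static void Main()
    {
        int frameRate = 25;          // video frames per second (assumed)
        int sampleRate = 16000;      // audio samples per second (assumed)
        int samplesPerChunk = 1600;  // PCM samples per captured chunk = 100 ms

        // PTS of the n-th video frame, in milliseconds
        long VideoPts(long frameIndex) => frameIndex * 1000 / frameRate;

        // PTS of an audio chunk, from the total samples already delivered
        long AudioPts(long samplesSent) => samplesSent * 1000 / sampleRate;

        Console.WriteLine(VideoPts(50));                  // 50 frames  -> 2000 ms
        Console.WriteLine(AudioPts(20 * samplesPerChunk)); // 20 chunks -> 2000 ms
    }
}
```

Because both streams stamp their data against the same clock origin, the player can interleave and present them in sync regardless of capture jitter.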

Our solution is based on .NET Core (C#) and the cross-platform UI framework Avalonia; with the help of the LinuxCapture capture component and the stream-pushing component, it is easy to capture microphone and camera data and push it to a streaming server.

Let's take a look at how the Push Stream program works on Galaxy Kirin:

(Screenshot: the push-stream demo UI running on Galaxy Kirin.)

Two drop-down lists allow you to select the microphone and camera devices to be used.

Click the "Start" button and the microphone and camera will start to capture data and push it to the streaming server.

If the network drops midway, streaming is interrupted and automatic reconnection is attempted; streaming resumes once the connection is re-established.
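The reconnect behavior described above can be sketched as a simple retry loop. This is an illustrative stand-in, not the component's real API: `TryConnect` represents re-opening the RTMP connection, which in the real program would be triggered from the pusher's disconnect notification.

```csharp
using System;
using System.Threading;

// Sketch: automatic reconnection with a fixed retry interval.
class ReconnectDemo
{
    // Hypothetical: simulate the network coming back on the third attempt.
    static bool TryConnect(int attempt) => attempt >= 3;

    static void Main()
    {
        int attempt = 0;
        while (true)
        {
            attempt++;
            if (TryConnect(attempt))
            {
                Console.WriteLine($"Reconnected on attempt {attempt}; resuming the push stream.");
                break;
            }
            Console.WriteLine($"Attempt {attempt} failed; retrying in 2 s...");
            Thread.Sleep(2000); // wait before the next attempt
        }
    }
}
```

A fixed interval is used here for simplicity; an exponential backoff is a common refinement when the server may stay unreachable for long periods.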

Clicking the "End" button will end the audio/video capture and streaming.

II. Concrete realization

(1) ICameraCapturer is the camera video capture component; IMicrophoneCapturer is the microphone audio capture component.

(2) We create the corresponding capturer instance by calling the CreateXXXX methods of CapturerFactory.

(3) After obtaining a capturer instance, call its Start method to begin capturing, and its Stop method to stop.

(4) The captured data is exposed through the corresponding events (ImageCaptured, AudioCaptured); we obtain the data by subscribing to these events.

(5) Feed the received data to IStreamPusher, and it will be pushed to the specified streaming server.

The core code is listed here; for the full implementation, download the source code linked at the end of the article.

Create and start the collector:

            // Camera capturer (the member names after "this." were lost in extraction;
            // they are reconstructed here from the factory/event names described above)
            this.cameraCapturer = CapturerFactory.CreateCameraCapturer(cameraIndex, videoSize, frameRate);
            this.cameraCapturer.ImageCaptured += CameraCapturer_ImageCaptured;
            this.cameraCapturer.CaptureError += CameraCapturer_CaptureError;
            // Microphone capturer
            this.microphoneCapturer = CapturerFactory.CreateMicrophoneCapturer(micIndex);
            this.microphoneCapturer.AudioCaptured += MicrophoneCapturer_AudioCaptured;
            this.microphoneCapturer.CaptureError += MicrophoneCapturer_CaptureError;
 
            this.cameraCapturer.Start();
            this.microphoneCapturer.Start();

Create and start the pusher:

           // Read the server address from configuration; the receiver names and the
           // parameters of the start call are reconstructed (check the downloaded
           // source for the exact signature)
           string nginxServerIP = ConfigurationManager.AppSettings["NginxServerIP"];
           int nginxServerPort = int.Parse(ConfigurationManager.AppSettings["NginxServerPort"]);
           string rtmpUrl = $"rtmp://{nginxServerIP}:{nginxServerPort}/hls/{streamID}";
           this.streamPusher.UpsideDown4RGB24 = true;
           this.streamPusher.Start(rtmpUrl, videoSize.Width, videoSize.Height, frameRate, sampleRate, this.channelCount);

Feed the collected data to the pusher:

private void CameraCapturer_ImageCaptured(byte[] rgba32Data)
{ 
    if (this.isRecording)
    {
        // Feed the raw frame to the pusher (member names after "this." were lost
        // in extraction and are reconstructed here)
        this.streamPusher.PushVideoFrame(rgba32Data); 
        // Update the preview image on the Avalonia UI thread
        Dispatcher.UIThread.InvokeAsync(() =>
        {
            WriteableBitmap writeableBitmap = CreateBitmapFromPixelData(rgba32Data, videoSize.Width, videoSize.Height);
            this.previewImage.Source = writeableBitmap;
        }); 
    }
}
 
private void MicrophoneCapturer_AudioCaptured(byte[] pcm)
{
    if (this.isRecording)
    {
        // Feed the PCM audio to the pusher (member name reconstructed)
        this.streamPusher.PushAudioFrame(pcm);
    }
}

The pusher encodes the audio and video data internally (AAC and H.264) and sends it to the streaming server over the RTMP protocol.

Stop pushing the stream:

private void FinishRecorded(bool success)
{ 
    this.RecordState_Changed(false);
    this.cameraCapturer?.Stop();
    this.cameraCapturer?.Dispose();
    this.microphoneCapturer?.Stop();
    this.microphoneCapturer?.Dispose();
    this.streamPusher?.Close();
    string tip = success ? "Streaming stopped!" : "The pusher was disconnected; streaming stopped!";
    ShowStateMsg(tip); 
}

III. Deployment and running

To run this RTMP push-stream program on Galaxy Kirin or UnionTech UOS, first install .NET Core 3.1 on the target operating system.

Then copy the netcoreapp3.1 folder from the VS build output directory to the target machine, enter the netcoreapp3.1 folder, open a terminal there, and run the following command:

dotnet Oraycn_Avalonias_PusherDemo.dll

After running the command, the UI shown in the earlier screenshot appears, and we can preview the camera and start pushing the microphone and camera streams.

IV. Source Code Download

The unmanaged libraries included in the source code are built for the x64 architecture. If you need to run the program on domestic chips with other architectures, you can contact me for unmanaged libraries built for those architectures.