The automotive industry has been evolving rapidly, with technological advances that improve both the driving experience and safety. Among these innovations, the Android Automotive Operating System (AAOS) stands out, offering a flexible and customizable platform for car manufacturers.
The Exterior View System (EVS) is a camera-based system designed to give drivers real-time visual monitoring of their vehicle's surroundings. It typically consists of several cameras positioned around the vehicle to eliminate blind spots and improve situational awareness, significantly aiding maneuvers such as parking and lane changes. By integrating with advanced driver assistance systems, EVS contributes to greater safety and convenience for drivers.
For more detailed information about EVS and its configuration, we highly recommend reading our article "Android AAOS 14 – Surround View Parking Camera: Configure and Launch EVS (Exterior View System)." That foundational article provides essential background and instructions that we build upon in this guide.
The latest Android Automotive Operating System, AAOS 14, opens up new possibilities, but it does not natively support Ethernet cameras. In this article, we describe our implementation of an Ethernet camera integration with the Exterior View System (EVS) on Android.
Our approach involves connecting a USB camera to a Windows laptop and streaming the video using the Real-time Transport Protocol (RTP). Using FFmpeg, the video stream is broadcast and described by an SDP (Session Description Protocol) file served over HTTP. On the Android side, we use the FFmpeg library to receive and decode the video stream, effectively bringing the camera feed into the AAOS 14 environment.
This article provides a step-by-step guide to how we achieved this EVS network camera integration, offering insights and practical instructions for anyone looking to implement a similar solution. The following diagram provides an overview of the entire process:
Building the FFmpeg Library for Android
To enable RTP camera streaming on Android, the first step is to build the FFmpeg library for the platform. This section describes the process in detail, using the ffmpeg-android-maker project. Follow these steps to build FFmpeg and integrate it with the Android EVS (Exterior View System) driver.
Step 1: Install the Android SDK
First, install the Android SDK. On Ubuntu/Debian systems, you can use the following command:
sudo apt update && sudo apt install android-sdk
The SDK should be installed in /usr/lib/android-sdk.
Step 2: Install the NDK
Download the Android NDK (Native Development Kit) from the official website:
https://developer.android.com/ndk/downloads
After downloading, extract the NDK to your desired location.
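For example, on Linux (the NDK release shown here is illustrative; any recent version supported by ffmpeg-android-maker should work):

wget https://dl.google.com/android/repository/android-ndk-r26d-linux.zip
unzip android-ndk-r26d-linux.zip -d ~/Android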
Step 3: Build FFmpeg
Clone the ffmpeg-android-maker repository and change into its directory:
git clone https://github.com/Javernaut/ffmpeg-android-maker.git
cd ffmpeg-android-maker
Set the environment variables to point to the SDK and NDK:
export ANDROID_SDK_HOME=/usr/lib/android-sdk
export ANDROID_NDK_HOME=/path/to/ndk/
Run the build script:
./ffmpeg-android-maker.sh
This script downloads the FFmpeg source code and its dependencies, and compiles FFmpeg for the various Android architectures.
Step 4: Copy the Library Files to the EVS Driver
After the build completes, copy the .so library files from build/ffmpeg/ to the EVS driver directory in your Android project:
cp build/ffmpeg/*.so /path/to/android/project/packages/services/Car/cpp/evs/sampleDriver/aidl/
Step 5: Add the Libraries to the EVS Driver Build Files
Edit the Android.bp file in the aidl directory to include the prebuilt FFmpeg libraries:
cc_prebuilt_library_shared {
    name: "rtp-libavcodec",
    vendor: true,
    srcs: ["libavcodec.so"],
    strip: {
        none: true,
    },
    check_elf_files: false,
}

cc_prebuilt_library_shared {
    name: "rtp-libavformat",
    vendor: true,
    srcs: ["libavformat.so"],
    strip: {
        none: true,
    },
    check_elf_files: false,
}

cc_prebuilt_library_shared {
    name: "rtp-libavutil",
    vendor: true,
    srcs: ["libavutil.so"],
    strip: {
        none: true,
    },
    check_elf_files: false,
}

cc_prebuilt_library_shared {
    name: "rtp-libswscale",
    vendor: true,
    srcs: ["libswscale.so"],
    strip: {
        none: true,
    },
    check_elf_files: false,
}
Then add the prebuilt libraries to the EVS driver binary:
cc_binary {
    name: "android.hardware.automotive.evs-default",
    defaults: ["android.hardware.graphics.common-ndk_static"],
    vendor: true,
    relative_install_path: "hw",
    srcs: [
        ":libgui_frame_event_aidl",
        "src/*.cpp",
    ],
    shared_libs: [
        "rtp-libavcodec",
        "rtp-libavformat",
        "rtp-libavutil",
        "rtp-libswscale",
        // ... the stock driver's existing shared_libs entries remain here ...
    ],
    // ...
}
By following these steps, you will have built the FFmpeg library for Android and integrated it into the EVS driver.
EVS Driver RTP Camera Implementation
In this chapter, we demonstrate how to quickly implement RTP support in the EVS (Exterior View System) driver on Android AAOS 14. This implementation is for demonstration purposes only: for production use, it should be optimized, adapted to specific requirements, and thoroughly tested across all relevant configurations and edge cases. Here, we focus solely on displaying the video stream received over RTP.
The main files responsible for capturing and decoding video from USB cameras are the EvsV4lCamera and VideoCapture classes. To handle RTP, we copy these classes and rename them to EvsRTPCamera and RTPCapture. The RTP handling itself lives in RTPCapture, where we need to implement four key functions (a sketch of the corresponding header follows the signatures below):
bool open(const char* deviceName, const int32_t width = 0, const int32_t height = 0);
void close();
bool startStream(std::function<void(RTPCapture*, imageBuffer*, void*)> callback = nullptr);
void stopStream();
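For reference, here is a minimal sketch of what RTPCapture.h can look like. It is reconstructed from the members and methods used in the implementation below (mCallback, mCaptureThread, stop_thread_1, isOpened, and friends); the imageBuffer alias and the exact member types are our assumptions, not the verbatim header.

// RTPCapture.h - minimal sketch, reconstructed from the implementation below.
#pragma once

#include <linux/videodev2.h>

#include <atomic>
#include <cstdint>
#include <functional>
#include <set>
#include <thread>

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

// Assumption: imageBuffer aliases the V4L2 buffer type used by the stock
// VideoCapture class (it exposes the .index and .length fields used below).
using imageBuffer = v4l2_buffer;

class RTPCapture {
public:
    bool open(const char* deviceName, const int32_t width = 0, const int32_t height = 0);
    void close();
    bool startStream(std::function<void(RTPCapture*, imageBuffer*, void*)> callback = nullptr);
    void stopStream();

    bool returnFrame(int i);
    void* getLatestData();
    bool isFrameReady();
    void markFrameConsumed(int i);
    bool isOpen();
    int setParameter(v4l2_control& control);
    int getParameter(v4l2_control& control);
    std::set<uint32_t> enumerateCameraControls();
    uint32_t getStride() { return mStride; }

private:
    // Worker thread body: reads packets and feeds them to the decoder.
    void collectFrames();
    int output_video_frame(AVFrame* frame);
    int decode_packet(AVCodecContext* dec, const AVPacket* pkt);
    int open_codec_context(int* stream_idx, AVCodecContext** dec_ctx,
                           AVFormatContext* fmt_ctx, enum AVMediaType type);

    std::function<void(RTPCapture*, imageBuffer*, void*)> mCallback = nullptr;
    std::thread mCaptureThread;
    std::atomic<bool> stop_thread_1{false};
    bool isOpened = false;

    uint32_t mFormat = 0;
    uint32_t mWidth = 0;
    uint32_t mHeight = 0;
    uint32_t mStride = 0;
};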
We base our code on the official example from the FFmpeg repository, https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/demux_decode.c, adapting it to decode the incoming video stream into RGBA buffers. After the adaptation, RTPCapture.cpp looks like this:
#embody "RTPCapture.h"
#embody <android-base/logging.h>
#embody <errno.h>
#embody <error.h>
#embody <fcntl.h>
#embody <reminiscence.h>
#embody <stdio.h>
#embody <stdlib.h>
#embody <sys/ioctl.h>
#embody <sys/mman.h>
#embody <unistd.h>
#embody <cassert>
#embody <iomanip>
#embody <stdio.h>
#embody <stdlib.h>
#embody <iostream>
#embody <fstream>
#embody <sstream>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, top;
static enum AVPixelFormat pix_fmt;
static enum AVPixelFormat out_pix_fmt = AV_PIX_FMT_RGBA;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static struct SwsContext *resize;
static const char *src_filename = NULL;
static uint8_t *video_dst_data[4] = NULL;
static int video_dst_linesize[4];
static int video_dst_bufsize;
static int video_stream_idx = -1, audio_stream_idx = -1;
static AVFrame *body = NULL;
static AVFrame *frame2 = NULL;
static AVPacket *pkt = NULL;
static int video_frame_count = 0;
int RTPCapture::output_video_frame(AVFrame *frame) {
    // Convert the decoded frame to RGBA and log the frame number and scaled height.
    LOG(INFO) << "Video_frame: " << video_frame_count++
              << " , scale height: "
              << sws_scale(resize, frame->data, frame->linesize, 0, height,
                           video_dst_data, video_dst_linesize);
    if (mCallback) {
        imageBuffer buf;
        buf.index = video_frame_count;
        buf.length = video_dst_bufsize;
        mCallback(this, &buf, video_dst_data[0]);
    }
    return 0;
}
int RTPCapture::decode_packet(AVCodecContext *dec, const AVPacket *pkt) {
    int ret = 0;

    // submit the packet to the decoder
    ret = avcodec_send_packet(dec, pkt);
    if (ret < 0)
        return ret;

    // get all the available frames from the decoder
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec, frame);
        if (ret < 0) {
            // AVERROR_EOF and AVERROR(EAGAIN) are not errors: they mean no
            // frame is available right now (as in FFmpeg's demux_decode.c)
            if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
                return 0;
            return ret;
        }

        // convert the decoded frame and pass it to the registered callback
        if (dec->codec->type == AVMEDIA_TYPE_VIDEO)
            ret = output_video_frame(frame);

        av_frame_unref(frame);
        if (ret < 0)
            return ret;
    }

    return 0;
}
int RTPCapture::open_codec_context(int *stream_idx, AVCodecContext **dec_ctx,
                                   AVFormatContext *fmt_ctx, enum AVMediaType type) {
    int ret, stream_index;
    AVStream *st;
    const AVCodec *dec = NULL;

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream in input file '%s'\n",
                av_get_media_type_string(type), src_filename);
        return ret;
    } else {
        stream_index = ret;
        st = fmt_ctx->streams[stream_index];

        /* find a decoder for the stream */
        dec = avcodec_find_decoder(st->codecpar->codec_id);
        if (!dec) {
            fprintf(stderr, "Failed to find %s codec\n",
                    av_get_media_type_string(type));
            return AVERROR(EINVAL);
        }

        /* allocate a codec context for the decoder */
        *dec_ctx = avcodec_alloc_context3(dec);
        if (!*dec_ctx) {
            fprintf(stderr, "Failed to allocate the %s codec context\n",
                    av_get_media_type_string(type));
            return AVERROR(ENOMEM);
        }

        /* copy codec parameters from the input stream to the codec context */
        if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0) {
            fprintf(stderr, "Failed to copy %s codec parameters to decoder context\n",
                    av_get_media_type_string(type));
            return ret;
        }

        /* request low-latency decoding */
        av_opt_set((*dec_ctx)->priv_data, "preset", "ultrafast", 0);
        av_opt_set((*dec_ctx)->priv_data, "tune", "zerolatency", 0);

        /* init the decoder */
        if ((ret = avcodec_open2(*dec_ctx, dec, NULL)) < 0) {
            fprintf(stderr, "Failed to open %s codec\n",
                    av_get_media_type_string(type));
            return ret;
        }
        *stream_idx = stream_index;
    }

    return 0;
}
bool RTPCapture::open(const char* /*deviceName*/, const int32_t /*width*/, const int32_t /*height*/) {
    LOG(INFO) << "RTPCapture::open";
    int ret = 0;

    avformat_network_init();

    mFormat = V4L2_PIX_FMT_YUV420;
    mWidth = 1920;
    mHeight = 1080;
    mStride = 0;

    /* open the SDP description of the stream and allocate the format context */
    if (avformat_open_input(&fmt_ctx, "http://192.168.1.59/stream.sdp", NULL, NULL) < 0) {
        LOG(ERROR) << "Could not open network stream";
        return false;
    }
    LOG(INFO) << "Input opened";
    isOpened = true;

    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        LOG(ERROR) << "Could not find stream information";
        return false;
    }
    LOG(INFO) << "Stream information found";

    if (open_codec_context(&video_stream_idx, &video_dec_ctx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
        video_stream = fmt_ctx->streams[video_stream_idx];

        /* allocate the buffer where the decoded image will be put */
        width = video_dec_ctx->width;
        height = video_dec_ctx->height;
        pix_fmt = video_dec_ctx->sw_pix_fmt;
        resize = sws_getContext(width, height, AV_PIX_FMT_YUVJ422P,
                                width, height, out_pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
        LOG(ERROR) << "RTPCapture::open pix_fmt: " << video_dec_ctx->pix_fmt
                   << ", sw_pix_fmt: " << video_dec_ctx->sw_pix_fmt
                   << ", my_fmt: " << pix_fmt;
        ret = av_image_alloc(video_dst_data, video_dst_linesize,
                             width, height, out_pix_fmt, 1);
        if (ret < 0) {
            LOG(ERROR) << "Could not allocate raw video buffer";
            return false;
        }
        video_dst_bufsize = ret;
    }

    av_dump_format(fmt_ctx, 0, src_filename, 0);

    if (!audio_stream && !video_stream) {
        LOG(ERROR) << "Could not find audio or video stream in the input, aborting";
        return false;
    }

    frame = av_frame_alloc();
    if (!frame) {
        LOG(ERROR) << "Could not allocate frame";
        return false;
    }
    frame2 = av_frame_alloc();

    pkt = av_packet_alloc();
    if (!pkt) {
        LOG(ERROR) << "Could not allocate packet";
        return false;
    }

    return true;
}
void RTPCapture::close() {
    LOG(DEBUG) << __FUNCTION__;
}

bool RTPCapture::startStream(std::function<void(RTPCapture*, imageBuffer*, void*)> callback) {
    LOG(INFO) << "startStream";
    if (!isOpen()) {
        LOG(ERROR) << "startStream failed. Stream not opened";
        return false;
    }
    stop_thread_1 = false;
    mCallback = callback;
    mCaptureThread = std::thread([this]() { collectFrames(); });
    return true;
}

void RTPCapture::stopStream() {
    LOG(INFO) << "stopStream";
    stop_thread_1 = true;
    mCaptureThread.join();
    mCallback = nullptr;
}

bool RTPCapture::returnFrame(int i) {
    LOG(INFO) << "returnFrame " << i;
    return true;
}
void RTPCapture::collectFrames() {
    int ret = 0;
    LOG(INFO) << "Reading frames";
    /* read packets from the stream until we are asked to stop */
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (stop_thread_1) {
            return;
        }
        if (pkt->stream_index == video_stream_idx) {
            ret = decode_packet(video_dec_ctx, pkt);
        }
        av_packet_unref(pkt);
        if (ret < 0) {
            break;
        }
    }
}
int RTPCapture::setParameter(v4l2_control&) {
    LOG(INFO) << "RTPCapture::setParameter";
    return 0;
}

int RTPCapture::getParameter(v4l2_control&) {
    LOG(INFO) << "RTPCapture::getParameter";
    return 0;
}

std::set<uint32_t> RTPCapture::enumerateCameraControls() {
    LOG(INFO) << "RTPCapture::enumerateCameraControls";
    // No camera controls are exposed for the RTP stream.
    std::set<uint32_t> ctrlIDs;
    return ctrlIDs;
}

void* RTPCapture::getLatestData() {
    LOG(INFO) << "RTPCapture::getLatestData";
    return nullptr;
}

bool RTPCapture::isFrameReady() {
    LOG(INFO) << "RTPCapture::isFrameReady";
    return true;
}

void RTPCapture::markFrameConsumed(int i) {
    LOG(INFO) << "RTPCapture::markFrameConsumed frame: " << i;
}

bool RTPCapture::isOpen() {
    LOG(INFO) << "RTPCapture::isOpen";
    return isOpened;
}
Next, we need to modify EvsRTPCamera to use our RTPCapture class instead of VideoCapture. In EvsRTPCamera.h, add:
#embody "RTPCapture.h"
And replace:
VideoCapture mVideo = {};
with:
RTPCapture mVideo = {};
In EvsRTPCamera.cpp, we also need changes. In the forwardFrame(imageBuffer* pV4lBuff, void* pData) function, replace:
mFillBufferFromVideo(bufferDesc, (uint8_t*)targetPixels, pData, mVideo.getStride());
with:
memcpy(targetPixels, pData, pV4lBuff->length);
This is because the VideoCapture class delivers camera buffers in various YUV-family pixel formats, and the mFillBufferFromVideo function converts those to RGBA. In our case, RTPCapture already provides an RGBA buffer; the conversion happens in int RTPCapture::output_video_frame(AVFrame *frame), using sws_scale from the FFmpeg library.
Now we need to make sure that our RTP camera is recognized by the system. The EvsEnumerator class and its enumerateCameras function are responsible for detecting cameras; this function registers all video device nodes found in the /dev/ directory.
To add our RTP camera, append the following code at the end of the enumerateCameras function:
if (addCaptureDevice("rtp1")) {
    ++captureCount;
}
This adds a camera with the ID "rtp1" to the list of detected cameras, making it visible to the system.
The final step is to modify the EvsEnumerator::openCamera function so that the camera with the ID "rtp1" is routed to the RTP implementation. Normally, when opening a USB camera, an instance of the EvsV4lCamera class is created:
pActiveCamera = EvsV4lCamera::Create(id.data());
In our example, we hardcode the ID check and create the appropriate object:
if (id == "rtp1")
pActiveCamera = EvsRTPCamera::Create(id.knowledge());
else
pActiveCamera = EvsV4lCamera::Create(id.knowledge());
With this implementation in place, our camera should start working. We now need to build the EVS driver and push it to the device together with the FFmpeg libraries:
mmma packages/services/Car/cpp/evs/sampleDriver/
adb push out/target/product/rpi4/vendor/bin/hw/android.hardware.automotive.evs-default /vendor/bin/hw/
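Pushing files into /vendor requires the partition to be writable; on a userdebug or eng build this typically means running adb root and adb remount first. The FFmpeg vendor libraries built above must also land on the device; the install path below is an assumption for a 64-bit target and may differ on your product:

adb root
adb remount
# push the FFmpeg vendor libraries alongside the driver binary, e.g.:
adb push out/target/product/rpi4/vendor/lib64/rtp-libavcodec.so /vendor/lib64/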
Launching the RTP Camera
To stream video from your camera, you need to install FFmpeg (https://www.ffmpeg.org/download.html#build-windows) and an HTTP server on the computer that will be streaming the video.
Start FFmpeg (example on Windows):
ffmpeg -f dshow -video_size 1280x720 -i video="USB Camera" -c copy -f rtp rtp://192.168.1.53:8554
where:
- -video_size is the video resolution
- "USB Camera" is the name of the camera as it appears in the Device Manager
- -c copy means that the individual frames from the camera (in JPEG format) are copied into the RTP stream unchanged; otherwise, FFmpeg would need to decode and re-encode the image, introducing unnecessary delay
- rtp://192.168.1.53:8554: 192.168.1.53 is the IP address of our Android device (adjust it accordingly); port 8554 can be left at its default
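For reference, a roughly equivalent capture on a Linux host would use FFmpeg's v4l2 input instead of dshow (illustrative; the device node and MJPEG support depend on your camera):

ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -i /dev/video0 -c copy -f rtp rtp://192.168.1.53:8554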
After starting FFmpeg, you should see output similar to this on the console:
Here we see the input, output, and SDP sections. In the input section, the codec is JPEG, which is what we need. The pixel format is yuvj422p, with a resolution of 1920×1080 at 30 fps. The stream parameters in the output section should match.
Next, save the SDP section to a file named stream.sdp on the HTTP server. Our EVS driver fetches this file, which describes the stream.
In our example, the Android device should access this file at: http://192.168.1.59/stream.sdp
The exact content of the file should be:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.1.53
t=0 0
a=tool:libavformat 61.1.100
m=video 8554 RTP/AVP 26
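Any static HTTP server can serve this file. For a quick test, assuming Python 3 is available on the streaming PC, run the following from the directory containing stream.sdp (port 80 matches the URL above and may require elevated privileges):

python3 -m http.server 80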
Now, restart the EVS driver on the Android device:
killall android.hardware.automotive.evs-default
Then, configure the EVS app to use the camera "rtp1". For detailed instructions on how to configure and launch EVS, refer to the article "Android AAOS 14 – Surround View Parking Camera: Configure and Launch EVS (Exterior View System)".
Performance Testing
In this chapter, we measure and compare the latency of the video stream from a camera connected via USB and via RTP.
How Did We Measure Latency?
- Setup Timer: We displayed a timer with millisecond precision on the computer screen.
- Camera Capture: We pointed the EVS camera at this screen so that the timer was also visible on the Android device's screen.
- Snapshot Comparison: We photographed both screens at the same moment. The time displayed on the Android device lagged behind the computer screen; this difference represents the camera's latency.
This latency consists of several components:
- Camera Latency: the time the camera takes to capture the image from the sensor and encode it into the appropriate format.
- Transmission Time: the time taken to transmit the data over USB or RTP.
- Decoding and Display: the time needed to decode the video stream and display the image on the screen.
Latency Comparison
Below are the photos showing the latency:
USB Camera
RTP Camera
From these measurements, we found that the average latency for the camera connected via USB to the Android device is 200 ms, while the latency for the camera connected via RTP is 150 ms. This result is quite surprising.
The reasons behind these results are:
- The EVS implementation on Android captures video from the USB camera in YUV and similar formats, while FFmpeg streams the RTP video in JPEG format.
- The USB camera we used has higher latency when producing YUV images than JPEG ones, and its YUV frame rate is much lower: at 1280×720, the YUV format supports only 10 fps, while JPEG supports the full 30 fps.
All camera modes can be listed with the command:
ffmpeg -f dshow -list_options true -i video="USB Camera"
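On a Linux host, the corresponding query for a V4L2 device would be (illustrative):

ffmpeg -f v4l2 -list_formats all -i /dev/video0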
Conclusion
This article has walked through the complete process of integrating an RTP camera into the Android EVS (Exterior View System) framework, covering both the implementation steps and the performance evaluation.
We began by creating two new classes, EvsRTPCamera and RTPCapture, designed specifically to handle RTP streams using FFmpeg. This adaptation allowed us to receive and decode real-time video. To make the system recognize the RTP camera, we made the necessary adjustments to the EvsEnumerator class: by customizing the enumerateCameras and openCamera functions, we ensured that our RTP camera is correctly detected and instantiated.
Next, we built and deployed the EVS driver, together with the required FFmpeg libraries, to our target Android device, which was essential for validating the implementation in a real-world setting. We also ran a detailed performance evaluation to measure and compare the latency of the USB and RTP camera feeds. Using a timer displayed on a computer screen, we captured the timer with the EVS camera and compared the time shown on the computer and Android screens, which let us determine the latency introduced by each camera setup.
Our performance tests showed that the RTP camera had an average latency of 150 ms, while the USB camera had 200 ms. This result was unexpected but highly informative: the lower latency of the RTP camera was largely due to the use of the JPEG format, which our particular USB camera handled more efficiently than its slow YUV path. This finding underscores the RTP approach's suitability for applications requiring real-time video, such as automotive surround-view parking systems, where quick response times are essential for safety and user experience.