Live video stream on server (PC) from images sent

Posted 2019-02-02 16:17

Hmm. I found this, which seems promising:

http://sourceforge.net/projects/mjpg-streamer/


OK, I will try to explain clearly and in detail what I am trying to do.

I have a small humanoid robot with a camera and a wifi stick (this is the robot). The wifi stick's average transfer rate is 1769KB/s. The robot has a 500MHz CPU and 256MB RAM, so it is not enough for any serious computation (moreover, there are already a couple of modules running on the robot for motion, vision, sonar, speech, etc.).

I have a PC from which I control the robot. I am trying to have the robot walk around the room and see, on the PC, a live video stream of what the robot sees.

What I already have working: the robot walks as I want it to and takes images with the camera. The images are sent over UDP to the PC, where I receive them (I have verified this by saving the incoming images to disk).

The camera returns 640x480 px images in the YUV422 colorspace. I am sending the images with lossy compression (JPEG) because I am trying to get the best possible FPS on the PC. I do the JPEG compression on the robot with the PIL library.

My questions:

  1. Could somebody give me some ideas on how to convert the incoming JPEG images into a live video stream? I understand I will need a video encoder for that. Which one do you recommend: FFmpeg or something else? I am very new to video streaming, so I want to know what works best for this task. I'd prefer to write this in Python, so a video encoder or library with a Python API would be ideal, but a library with a good command-line API would do as well.

  2. What is the best FPS I could get out of this, given the 1769KB/s average wifi transfer rate and the dimensions of the images? Should I use a different compression than JPEG? (A rough estimate is sketched after this list.)

  3. I will be happy to see any code examples. Links to articles explaining how to do this would be fine, too.
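
My own back-of-the-envelope estimate for question 2 (the per-frame size is an assumption on my part; real JPEG sizes depend on the quality setting and scene content):

  bandwidth = 1769 * 1024   # bytes/s, the measured average wifi transfer rate
  frame_size = 30 * 1024    # bytes, assumed size of one 640x480 JPEG frame

  print bandwidth / frame_size   # ~58 fps upper bound, before protocol overhead

In practice the robot's 500MHz CPU doing the JPEG compression may well be the bottleneck rather than the link, so it is worth timing the encode step separately.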

Some code samples. Here is how I am sending the JPEG images from the robot to the PC (shortened, simplified snippet). This runs on the robot:

  # lots of code here

  from socket import socket, AF_INET, SOCK_DGRAM
  import StringIO
  from PIL import Image

  UDPSock = socket(AF_INET, SOCK_DGRAM)

  while 1:
    # grab one frame from the camera (camProxy, nameId and addr
    # are set up in the elided code above)
    image = camProxy.getImageLocal(nameId)
    size = (image[0], image[1])
    data = image[6]
    im = Image.fromstring("YCbCr", size, data)

    # compress the frame to JPEG in memory and send it as one datagram
    s = StringIO.StringIO()
    im.save(s, "JPEG")
    UDPSock.sendto(s.getvalue(), addr)

    camProxy.releaseImage(nameId)

  UDPSock.close()

  # lots of code here
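
Note that `sendto` here ships each frame as a single datagram, and a UDP payload cannot exceed 65507 bytes, so a frame that happens to compress badly would fail to send. A minimal guard (my addition, not part of the code above):

  payload = s.getvalue()
  # 65507 bytes = 65535 (max datagram) - 8 (UDP header) - 20 (IP header)
  if len(payload) <= 65507:
      UDPSock.sendto(payload, addr)
  else:
      # Frame too large for one datagram: lower the JPEG quality, or
      # split it over several packets and reassemble on the PC.
      pass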

Here is how I am receiving the images on the PC. This runs on the PC:

  # lots of code here

  from socket import socket, AF_INET, SOCK_DGRAM

  UDPSock = socket(AF_INET, SOCK_DGRAM)
  UDPSock.bind(addr)  # addr and buf are defined in the elided code above

  while 1:
    # each datagram carries one JPEG-compressed frame from the robot
    data, addr = UDPSock.recvfrom(buf)
    # here I need to create a stream from the data
    # which contains a JPEG image

  UDPSock.close()

  # lots of code here
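
Regarding the comment in that loop: a sequence of concatenated JPEG frames is already a valid MJPEG stream, so one option I am considering is to pipe the raw datagrams straight into a player that can decode MJPEG, such as ffplay from the FFmpeg suite. A minimal sketch, assuming one complete JPEG per datagram and ffplay on the PATH (the port number is a placeholder):

  import subprocess
  from socket import socket, AF_INET, SOCK_DGRAM

  # ffplay can demux and decode a raw MJPEG stream read from stdin ("-")
  player = subprocess.Popen(["ffplay", "-f", "mjpeg", "-"],
                            stdin=subprocess.PIPE)

  UDPSock = socket(AF_INET, SOCK_DGRAM)
  UDPSock.bind(("0.0.0.0", 5000))  # placeholder port

  while 1:
      data, sender = UDPSock.recvfrom(65536)
      player.stdin.write(data)  # feed each JPEG frame to the player

Alternatively, to get the frames back into Python, each datagram can be decoded directly with `Image.open(StringIO.StringIO(data))`, since PIL's `Image.open` accepts any file-like object.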

2 Answers
再贱就再见
#2 · 2019-02-02 17:03

Checking out your first question: the solution here uses a non-streaming set of pictures, but it might help. The example uses pyMedia.

Something along the lines of what you want.

If you have a need to edit a binary stream:

\"骚年 ilove
3楼-- · 2019-02-02 17:04

Try pyffmpeg and test each available codec for the best performance. You probably need a very lightweight codec like Smoke, low-profile H.263, or x264, and you probably need to drop the resolution to 320x240.

You have a trade-off between video encoding/decoding latency and the bandwidth used. You might find that dropping down to 160x120 with raw packets works for quick scene analysis, transmitting a full frame only periodically. You could also mix a raw, low-latency, low-resolution, high-update feed with a highly compressed, high-latency, high-resolution, low-update feed.
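
As a quick way to experiment before committing to pyffmpeg, you can drive plain ffmpeg from Python. A sketch, assuming the PC relays the incoming MJPEG frames into ffmpeg via stdin (the output address, preset, and resolution are placeholders to tune):

  import subprocess

  # Re-encode incoming JPEG frames to low-latency H.264 at 320x240 and
  # send the result out as an MPEG-TS stream over UDP.
  encoder = subprocess.Popen([
      "ffmpeg",
      "-f", "mjpeg", "-i", "-",      # raw JPEG frames arrive on stdin
      "-vf", "scale=320:240",        # drop the resolution as suggested
      "-c:v", "libx264",
      "-preset", "ultrafast",        # minimize encoding latency
      "-tune", "zerolatency",
      "-f", "mpegts", "udp://127.0.0.1:1234",
  ], stdin=subprocess.PIPE)
  # then write each received frame to encoder.stdin, as in the receiver sketch

Swapping the codec, preset, and scale values from the command line is a fast way to compare latency against bandwidth for your link.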
