What is the best way to implement a cross-platform, multi-threaded server?

Posted 2020-05-19 02:41

Part of the development team I work with has been given the challenge of writing a server for integration with our product. We have some low-level sensor devices that provide a C SDK, and we want to share them over a network for use by people collecting data. Sounds simple, right? Someone would connect a sensor device to their machine in one part of the building and run our server, thus sharing the device(s) with the rest of the network. Then a client would connect to that server via our application and collect sensor readings from the device.

I created a simple, language-agnostic network protocol, and a reference implementation in Java. The problem is creating an implementation that will work with our devices that only provide an SDK written in C. We were thinking of doing the following:

  1. Create polling threads that collect and store the most recent readings from each connected device.
  2. Use a multi-threaded server to spin off each incoming connection to a worker thread.
  3. When a worker thread receives a request for a sensor reading, the most recent value collected by the polling thread is sent back to the client.

That's a lot of threading, especially in C. So, to review, the general requirements are:

  • Runs on Windows XP/Vista, Linux, and OS X machines
  • Written in C or C++, to interact with the C SDK we have
  • Accepts a variable number of simultaneous connections (worker threads)
  • Must use threads, not forking (don't want to deal with another layer of IPC)

Can anyone suggest a library and preferably some example code to get us started?
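For concreteness, here is a rough sketch of the polling-thread idea (step 1 above) using POSIX threads. The device_read() function is just a stand-in for whatever the vendor's C SDK actually provides, and on Windows we would need Win32 threads or a portability wrapper instead of pthreads:

    // Sketch only: one polling thread per device keeps the latest reading
    // in shared state; worker threads would copy it out under the same mutex.
    #include <pthread.h>
    #include <unistd.h>

    // Stand-in for the vendor SDK call; replace with the real C SDK function.
    static double device_read(int device_id) { (void)device_id; return 21.5; }

    struct SensorState {
        pthread_mutex_t lock;
        double          latest;   // most recent reading, shared with worker threads
    };

    static void* poll_device(void* arg) {
        SensorState* s = static_cast<SensorState*>(arg);
        for (;;) {                                // runs for the life of the process
            double value = device_read(0);
            pthread_mutex_lock(&s->lock);
            s->latest = value;                    // store the most recent reading
            pthread_mutex_unlock(&s->lock);
            usleep(100 * 1000);                   // poll at roughly 10 Hz
        }
        return 0;
    }

    int main() {
        SensorState state;
        state.latest = 0.0;
        pthread_mutex_init(&state.lock, 0);

        pthread_t poller;
        pthread_create(&poller, 0, poll_device, &state);   // step 1: polling thread
        // Steps 2-3 would go here: accept() connections and spin off worker
        // threads that copy state.latest under the mutex and send it back.
        pthread_join(poller, 0);
        return 0;
    }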

11 Answers
霸刀☆藐视天下
Reply #2 · 2020-05-19 03:18

I'd also like to recommend The Spread Toolkit, which (according to the project's web site) is "an open source toolkit that provides a high performance messaging service that is resilient to faults across local and wide area networks". I have used it a couple of times in situations that sound very similar to yours. Essentially, it gives you the server that frankodwyer suggests.

The Spread daemon (i.e., the server) ain't multithreaded, but it's really fast and scales well up to at least hundreds or thousands of clients. Moreover, the protocol caters for reliable IP multicasting, which (in a multi-client environment) may give you (performance-wise) a definite edge against anything implemented using point-to-point TCP or UDP connections only. (But: do not try to implement reliable IP multicasting yourself... apparently, the Spread project has produced a number of PhD/MSc theses as a side-product - or is it the toolkit that is the side-product while the main emphasis was always academic research? I don't really know...).

Since Spread has C and Java client APIs (plus Python), it sounds like a very good match for your problem. They have two licensing models; the first alternative is close to the BSD license. And it's cross-platform of course (both the client and the server).

However, Spread will (of course) not do everything for you. Most prominently, it does not do persistence (i.e., if your client is offline or otherwise unable to receive messages, Spread will not buffer them, beyond a very small number of messages at least). But fortunately it's not too difficult to roll your own persistence implementation on top of what Spread does guarantee (I don't know whether that even matters for you). Second, Spread limits your messages to 100 kilobytes each, but that limit is also quite easy to circumvent: have the sender chop a big message into a number of smaller ones and concatenate them at the receiver.
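If it helps, sending a reading through Spread's C client API looks roughly like the sketch below. This is from memory, so treat the exact signatures, constants, and the daemon address as assumptions and check sp.h and the Spread documentation:

    // Rough sketch: publish one sensor reading to a Spread group.
    #include <sp.h>
    #include <cstring>

    int main() {
        mailbox mbox;
        char    private_group[MAX_GROUP_NAME];

        // Connect to the local Spread daemon (4803 is the default port).
        int ret = SP_connect("4803@localhost", "sensor01", 0, 1, &mbox, private_group);
        if (ret < 0) {
            SP_error(ret);       // prints a human-readable error
            return 1;
        }

        // Reliable multicast to everyone joined to the "sensor-readings" group.
        const char msg[] = "device=0 temp=21.5";
        SP_multicast(mbox, RELIABLE_MESS, "sensor-readings", 0, (int)strlen(msg), msg);

        SP_disconnect(mbox);
        return 0;
    }

The clients would call SP_join() on the same group and pick up readings with SP_receive().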

闹够了就滚
Reply #3 · 2020-05-19 03:19

I agree with frankodwyer: flip the protocol from a pull model to a push model.

Have the computer with the connected sensor broadcast the readings over UDP multicast at 100 Hz (or whatever makes sense for your sensors) whenever the sharing service is running. Then write clients that read the multicast data.

Alternatively you could use broadcast UDP instead of multicast.

BTW, this is how many GPS, Lidar, and other sensors do things.
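For illustration, a minimal sender along those lines with POSIX sockets might look like this (the group address, port, and payload format are made up; on Windows you would add winsock2.h and WSAStartup, but the socket calls are otherwise the same):

    // Sketch only: blast the latest reading to a multicast group ~100 times a second.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in group = {};
        group.sin_family = AF_INET;
        group.sin_port   = htons(5500);                     // arbitrary port
        inet_pton(AF_INET, "239.0.0.1", &group.sin_addr);   // arbitrary multicast group

        for (;;) {
            char reading[64];
            int len = snprintf(reading, sizeof(reading), "temp=21.5");  // would come from the SDK
            sendto(sock, reading, len, 0, (sockaddr*)&group, sizeof(group));
            usleep(10 * 1000);                              // roughly 100 Hz
        }
    }

Receivers join the group with setsockopt(IP_ADD_MEMBERSHIP) and simply recvfrom() in a loop.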

不美不萌又怎样
Reply #4 · 2020-05-19 03:22

The best way to write such a server is not to write one: rearchitect your system so it isn't necessary, and/or reuse components that already exist. Because:

Someone would connect a sensor device to their machine in one part of the building and run our server, thus sharing the device(s) with the rest of the network.

This also has the potential to share the entire machine with the rest of the network, if your code has a vulnerability (which it probably will, as you're writing it in C++ from scratch and inventing a new protocol).

So, do it the other way around. Install a simple client on the machine that has the sensor hardware, then run it either all the time, or periodically, and have it push (post) results to a central server. The central server could even be a standard web server. Or it could be a database. (Notice that both of these have been written already - no need to reinvent the wheel ;-)

Your application then works the same way you have in mind now, except that it collects data from the database rather than the sensors. The part running on the machine with the sensor, however, has shrunk from a multi-threaded custom server nightmare to a nice little single-threaded command-line client that only makes outgoing connections, and which can be run from cron (or its equivalent on Windows).
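If the central server is a plain web server, the collector client can be as small as this libcurl sketch (the URL and payload format here are invented for illustration):

    // Sketch only: post one reading to a central collector and exit.
    // Run it from cron / Task Scheduler as often as you need samples.
    #include <curl/curl.h>
    #include <string>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        std::string body = "device=0&temp=21.5";   // in reality, read via the sensor SDK

        curl_easy_setopt(curl, CURLOPT_URL, "http://collector.example.com/readings");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
        CURLcode rc = curl_easy_perform(curl);     // one outgoing POST, no listening sockets

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }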

Even if you need real-time data collection (and from your description it sounds like you do not), it may still be better for the sensor collector to be a client and not a server. Let it open a long-lived connection to a central collector (or a group of them) and await instructions to provide its data.

edit: ceretullis and pukku's answers suggest a nice variation on this using multicast - see this answer and the comments

ら.Afraid
Reply #5 · 2020-05-19 03:22

Use a cross-platform API, or create your own abstraction layer that you implement separately for each architecture.

Also see this: http://www.goingware.com/tips/getting-started/

▲ chillily
Reply #6 · 2020-05-19 03:26

I highly recommend you consider the prototype design pattern.

I used this pattern to write a protocol-agnostic server in C++ that I have used for everything from HTTP web services to custom proprietary binary protocols.

Basically, the idea is this:

The Server takes care of accept()ing incoming connections on a particular port and creating threads (or processes) to handle those connections.

When you're trying to build a generic server you realize that you cannot really read or write any data without making assumptions about the protocol... So, the trick is to use the prototype pattern.

Create a "ConnectionHandlerBase" class with a pure "HandleConnection() = 0" method. Make the users of the server class subclass this class with their own implementation. Additionally, this class implements a "Clone()" method that returns a copy of itself... This way the server can create new instances of it without needing to know its type... Then when you get a connection, call "Clone()" on your prototypical instance and have the handling thread call "HandleConnection()" on this object.

At application startup, the user of the server class has to call something like this:

"Server.AttachConnectionPrototype( &MyConnectionObject );"

放荡不羁爱自由
Reply #7 · 2020-05-19 03:29

If you want to use C (and not C++), the NSPR library might provide what you need...
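For example, spinning up a portable thread with NSPR looks roughly like this (poll_device() is a placeholder; check the NSPR documentation for the exact flags you want):

    // Sketch only: one NSPR thread, joined before exit. Link against libnspr4.
    #include <prthread.h>
    #include <prinit.h>

    static void poll_device(void* arg) {
        // talk to the sensor SDK and store the latest reading somewhere shared
        (void)arg;
    }

    int main() {
        PRThread* t = PR_CreateThread(PR_USER_THREAD,
                                      poll_device, 0,       // entry point and argument
                                      PR_PRIORITY_NORMAL,
                                      PR_GLOBAL_THREAD,     // let the OS schedule it
                                      PR_JOINABLE_THREAD,
                                      0);                   // default stack size
        PR_JoinThread(t);
        PR_Cleanup();
        return 0;
    }

NSPR also wraps sockets (prio.h), so the networking side can stay portable too.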
