Question:
I'm trying to use redux-saga to connect events from PouchDB to my React.js application, but I'm struggling to figure out how to connect events emitted from PouchDB to my saga. Since the event API takes a callback function (and I can't pass it a generator), I can't use yield put() inside the callback; it gives weird errors after ES2015 compilation (using Webpack).
So here's what I'm trying to accomplish; the part that doesn't work is inside replication.on('change', (info) => {}).
function * startReplication (wrapper) {
  while (yield take(DATABASE_SET_CONFIGURATION)) {
    yield call(wrapper.connect.bind(wrapper))

    // Returns a promise, or false.
    let replication = wrapper.replicate()

    if (replication) {
      replication.on('change', (info) => {
        yield put(replicationChange(info))
      })
    }
  }
}

export default [ startReplication ]
Answer 1:
As Nirrek explained, when you need to connect to push data sources, you'll have to build an event iterator for that source.
I'd like to add that the above mechanism can be made reusable, so we don't have to recreate an event iterator for each different source.
The solution is to create a generic channel with put and take methods. You call the take method from inside the generator and connect the put method to the listener interface of your data source.
Here is a possible implementation. Note that the channel buffers messages if no one is waiting for them (e.g. the generator is busy doing some remote call).
function createChannel () {
  const messageQueue = []
  const resolveQueue = []

  function put (msg) {
    // anyone waiting for a message?
    if (resolveQueue.length) {
      // deliver the message to the oldest one waiting (First In, First Out)
      const nextResolve = resolveQueue.shift()
      nextResolve(msg)
    } else {
      // no one is waiting? queue the message
      messageQueue.push(msg)
    }
  }

  // returns a Promise resolved with the next message
  function take () {
    // do we have queued messages?
    if (messageQueue.length) {
      // deliver the oldest queued message
      return Promise.resolve(messageQueue.shift())
    } else {
      // no queued messages? queue the taker until a message arrives
      return new Promise((resolve) => resolveQueue.push(resolve))
    }
  }

  return {
    take,
    put
  }
}
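A quick sketch of how this channel behaves (re-declaring createChannel so the snippet runs standalone): messages put before anyone takes are buffered in order, and a take issued before any message arrives waits until the next put.

```javascript
function createChannel () {
  const messageQueue = []
  const resolveQueue = []
  function put (msg) {
    if (resolveQueue.length) {
      // someone is already waiting: hand the message to the oldest taker
      resolveQueue.shift()(msg)
    } else {
      // no taker yet: buffer the message
      messageQueue.push(msg)
    }
  }
  function take () {
    if (messageQueue.length) {
      // a buffered message is available: deliver it immediately
      return Promise.resolve(messageQueue.shift())
    }
    // nothing buffered: park the taker until the next put
    return new Promise(resolve => resolveQueue.push(resolve))
  }
  return { take, put }
}

async function demo () {
  const channel = createChannel()
  channel.put('first')            // buffered: no taker yet
  channel.put('second')           // buffered behind 'first'
  const a = await channel.take()  // drains 'first'
  const b = await channel.take()  // drains 'second'
  const pending = channel.take()  // parked: queue is empty
  channel.put('third')            // wakes the parked taker
  const c = await pending
  return [a, b, c]
}

const demoResult = demo()
```

The FIFO order on both queues is what makes the channel safe to share between a slow consumer and a bursty event source.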
Then the above channel can be used any time you want to listen to an external push data source. For your example:
function createChangeChannel (replication) {
  const channel = createChannel()

  // every change event will call put on the channel
  replication.on('change', channel.put)
  return channel
}
function * startReplication (getState) {
  // Wait for the configuration to be set. This can happen multiple
  // times during the life cycle, for example when the user wants to
  // switch database/workspace.
  while (yield take(DATABASE_SET_CONFIGURATION)) {
    let state = getState()
    let wrapper = state.database.wrapper

    // Wait for a connection to work.
    yield apply(wrapper, wrapper.connect)

    // Trigger replication, and keep the promise.
    let replication = wrapper.replicate()

    if (replication) {
      yield call(monitorChangeEvents, createChangeChannel(replication))
    }
  }
}
function * monitorChangeEvents (channel) {
  while (true) {
    const info = yield call(channel.take) // Blocks until the promise resolves
    yield put(databaseActions.replicationChange(info))
  }
}
Answer 2:
The fundamental problem we have to solve is that event emitters are 'push-based', whereas sagas are 'pull-based'.
If you subscribe to an event like so: replication.on('change', (info) => {}), then the callback is executed whenever the replication event emitter decides to push a new value.
With sagas, we need to flip the control around. It is the saga that must be in control of when it decides to respond to new change info being available. Put another way, a saga needs to pull the new info.
Below is an example of one way to achieve this:
function* startReplication(wrapper) {
  while (yield take(DATABASE_SET_CONFIGURATION)) {
    yield apply(wrapper, wrapper.connect);
    let replication = wrapper.replicate()
    if (replication)
      yield call(monitorChangeEvents, replication);
  }
}

function* monitorChangeEvents(replication) {
  const stream = createReadableStreamOfChanges(replication);

  while (true) {
    const info = yield stream.read(); // Blocks until the promise resolves
    yield put(replicationChange(info));
  }
}

// Returns a stream object that has a read() method we can use to read new info.
// The read() method returns a Promise that will be resolved when info from a
// change event becomes available. This is what allows us to shift from working
// with a 'push-based' model to a 'pull-based' model.
function createReadableStreamOfChanges(replication) {
  let deferred;

  replication.on('change', info => {
    if (!deferred) return;
    deferred.resolve(info);
    deferred = null;
  });

  return {
    read() {
      if (deferred) return deferred.promise;
      deferred = {};
      deferred.promise = new Promise(resolve => deferred.resolve = resolve);
      return deferred.promise;
    }
  };
}
There is a JSbin of the above example here: http://jsbin.com/cujudes/edit?js,console
You should also take a look at Yassine Elouafi's answer to a similar question:
Can I use redux-saga's es6 generators as onmessage listener for websockets or eventsource?
Answer 3:
We can use the eventChannel of redux-saga. Here is my example:
// fetch history messages
function* watchMessageEventChannel(client) {
  const chan = eventChannel(emitter => {
    client.on('message', (message) => emitter(message));
    return () => {
      client.close().then(() => console.log('logout'));
    };
  });

  while (true) {
    const message = yield take(chan);
    yield put(receiveMessage(message));
  }
}

function* fetchMessageHistory(action) {
  const client = yield realtime.createIMClient('demo_uuid');
  // listen for message events
  yield fork(watchMessageEventChannel, client);
}
Please note: messages on an eventChannel are not buffered by default. If you want to process message events one by one, you cannot use a blocking call after const message = yield take(chan); otherwise, you have to provide a buffer to the eventChannel factory in order to specify a buffering strategy for the channel (e.g. eventChannel(subscriber, buffer)). See the redux-saga API docs for more info.
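For illustration, the "sliding" strategy mentioned in the redux-saga docs can be sketched in plain JavaScript. This is a rough standalone sketch of the idea only, not redux-saga's actual buffers implementation:

```javascript
// Sliding buffer: keeps at most `limit` items, dropping the oldest on overflow.
function slidingBuffer (limit) {
  const items = []
  return {
    put (item) {
      if (items.length >= limit) {
        items.shift() // full: drop the oldest item to make room
      }
      items.push(item)
    },
    take () {
      return items.shift() // undefined when empty
    },
    isEmpty () {
      return items.length === 0
    }
  }
}

const buf = slidingBuffer(2)
buf.put('a')
buf.put('b')
buf.put('c') // overflow: 'a' is dropped, keeping the two newest items
```

A strategy like this trades completeness for freshness: a slow consumer always sees the most recent events rather than an ever-growing backlog.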
Answer 4:
Thanks to @Yassine Elouafi. Based on his solution, I created a short MIT-licensed generic channels implementation as a redux-saga extension for the TypeScript language.
// redux-saga/channels.ts
import { Saga } from 'redux-saga';
import { call, fork } from 'redux-saga/effects';

export interface IChannel<TMessage> {
  take(): Promise<TMessage>;
  put(message: TMessage): void;
}

export function* takeEvery<TMessage>(channel: IChannel<TMessage>, saga: Saga) {
  while (true) {
    const message: TMessage = yield call(channel.take);
    yield fork(saga, message);
  }
}

export function createChannel<TMessage>(): IChannel<TMessage> {
  const messageQueue: TMessage[] = [];
  const resolveQueue: ((message: TMessage) => void)[] = [];

  function put(message: TMessage): void {
    if (resolveQueue.length) {
      const nextResolve = resolveQueue.shift();
      nextResolve(message);
    } else {
      messageQueue.push(message);
    }
  }

  function take(): Promise<TMessage> {
    if (messageQueue.length) {
      return Promise.resolve(messageQueue.shift());
    } else {
      return new Promise((resolve: (message: TMessage) => void) => resolveQueue.push(resolve));
    }
  }

  return {
    take,
    put
  };
}
And example usage, similar to redux-saga's takeEvery construction:
// example-socket-action-binding.ts
import { put } from 'redux-saga/effects';
import {
  createChannel,
  takeEvery as takeEveryChannelMessage
} from './redux-saga/channels';

export function* socketBindActions(
  socket: SocketIOClient.Socket
) {
  const socketChannel = createSocketChannel(socket);

  yield* takeEveryChannelMessage(socketChannel, function* (action: IAction) {
    yield put(action);
  });
}

function createSocketChannel(socket: SocketIOClient.Socket) {
  const socketChannel = createChannel<IAction>();
  socket.on('action', (action: IAction) => socketChannel.put(action));
  return socketChannel;
}
Answer 5:
I had the same problem, also using PouchDB, and found the answers provided extremely useful and interesting. However, there are many ways to do the same thing in PouchDB; I dug around a little and found a different approach which may be easier to reason about.
If you don't attach listeners to the db.changes request, then it returns any change data directly to the caller, and adding continuous: true to the options will cause it to issue a long poll and not return until some change has happened. So the same result can be achieved with the following:
export function * monitorDbChanges () {
  var info = yield call([db, db.info]); // get a reference to the last change
  let lastSeq = info.update_seq;

  while (true) {
    try {
      var changes = yield call([db, db.changes], { since: lastSeq, continuous: true, include_docs: true, heartbeat: 20000 });
      if (changes) {
        for (let i = 0; i < changes.results.length; i++) {
          yield put({type: 'CHANGED_DOC', doc: changes.results[i].doc});
        }
        lastSeq = changes.last_seq;
      }
    } catch (error) {
      yield put({type: 'monitor-changes-error', err: error});
    }
  }
}
There is one thing that I haven't got to the bottom of. If I replace the for loop with changes.results.forEach((change) => {...}), then I get an invalid syntax error on the yield. I'm assuming it's something to do with some clash in the use of iterators.
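That syntax error is in fact expected: yield is only valid directly inside a generator function's body, and the forEach callback is an ordinary function, so the yield inside it is illegal. A for...of loop (like the indexed for loop above) keeps execution inside the generator. A standalone sketch with fake change data shaped like PouchDB's results:

```javascript
function * emitDocs (changes) {
  // for...of runs in the generator's own body, so yield is legal here
  for (const change of changes.results) {
    yield change.doc
  }
}

// fake changes payload shaped like PouchDB's { results: [{ doc }] }
const fakeChanges = { results: [ { doc: { _id: 'a' } }, { doc: { _id: 'b' } } ] }
const docs = [...emitDocs(fakeChanges)]
```

The same rule is why the original question's replication.on('change', (info) => { yield put(...) }) fails to compile: the arrow function is not a generator.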