Limited concurrency level task scheduler (with task priority) handling wrapped tasks

Posted 2019-07-03 19:30

I am having a hard time finding a task scheduler on which I can schedule prioritized tasks, but which can also handle "wrapped" tasks. This is somewhat like what Task.Run tries to solve, but you cannot specify a task scheduler for Task.Run. I have been using a QueuedTaskScheduler from the Parallel Extensions Extras samples, which solves the task prioritization requirement (as also suggested by this post).

Here is my example:

class Program
{
    private static QueuedTaskScheduler queueScheduler = new QueuedTaskScheduler(targetScheduler: TaskScheduler.Default, maxConcurrencyLevel: 1);
    private static TaskScheduler ts_priority1;
    private static TaskScheduler ts_priority2;
    static void Main(string[] args)
    {
        ts_priority1 = queueScheduler.ActivateNewQueue(1);
        ts_priority2 = queueScheduler.ActivateNewQueue(2);

        QueueValue(1, ts_priority2);
        QueueValue(2, ts_priority2);
        QueueValue(3, ts_priority2);
        QueueValue(4, ts_priority1);
        QueueValue(5, ts_priority1);
        QueueValue(6, ts_priority1);

        Console.ReadLine();           
    }

    private static Task QueueTask(Func<Task> f, TaskScheduler ts)
    {
        return Task.Factory.StartNew(f, CancellationToken.None, TaskCreationOptions.HideScheduler | TaskCreationOptions.DenyChildAttach, ts);
    }

    private static Task QueueValue(int i, TaskScheduler ts)
    {
        return QueueTask(async () =>
        {
            Console.WriteLine("Start {0}", i);
            await Task.Delay(1000);
            Console.WriteLine("End {0}", i);
        }, ts);
    }
}

Typical output of the example above is:

Start 4
Start 5
Start 6
Start 1
Start 2
Start 3
End 4
End 3
End 5
End 2
End 1
End 6

What I want instead is:

Start 4
End 4
Start 5
End 5
Start 6
End 6
Start 1
End 1
Start 2
End 2
Start 3
End 3

Edit:

I think I am looking for a task scheduler, similar to QueuedTaskScheduler, that solves this problem. But any other suggestions are welcome.

Answer 1:

Unfortunately, this cannot be solved with a TaskScheduler, because schedulers always work at the Task level, and an async method almost always contains multiple Tasks.

You should use a SemaphoreSlim in conjunction with the prioritizing scheduler. Alternatively, you could use AsyncLock (which is also included in my AsyncEx library).

class Program
{
  private static QueuedTaskScheduler queueScheduler = new QueuedTaskScheduler(targetScheduler: TaskScheduler.Default, maxConcurrencyLevel: 1);
  private static TaskScheduler ts_priority1;
  private static TaskScheduler ts_priority2;
  // Throttles the wrapped (async) work so that only one item runs at a time.
  private static SemaphoreSlim semaphore = new SemaphoreSlim(1);
  static void Main(string[] args)
  {
    ts_priority1 = queueScheduler.ActivateNewQueue(1);
    ts_priority2 = queueScheduler.ActivateNewQueue(2);

    QueueValue(1, ts_priority2);
    QueueValue(2, ts_priority2);
    QueueValue(3, ts_priority2);
    QueueValue(4, ts_priority1);
    QueueValue(5, ts_priority1);
    QueueValue(6, ts_priority1);

    Console.ReadLine();           
  }

  private static Task QueueTask(Func<Task> f, TaskScheduler ts)
  {
    // Unwrap() makes the returned task represent the completion of the inner async delegate.
    return Task.Factory.StartNew(f, CancellationToken.None, TaskCreationOptions.HideScheduler | TaskCreationOptions.DenyChildAttach, ts).Unwrap();
  }

  private static Task QueueValue(int i, TaskScheduler ts)
  {
    return QueueTask(async () =>
    {
      await semaphore.WaitAsync();
      try
      {
        Console.WriteLine("Start {0}", i);
        await Task.Delay(1000);
        Console.WriteLine("End {0}", i);
      }
      finally
      {
        semaphore.Release();
      }
    }, ts);
  }
}
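
With the AsyncLock alternative mentioned above, only the locking part of QueueValue changes. A minimal sketch, assuming the AsyncLock type from the Nito.AsyncEx package:

using Nito.AsyncEx;

  // Replaces the SemaphoreSlim field; LockAsync() returns a disposable that
  // releases the lock when the using block exits.
  private static readonly AsyncLock mutex = new AsyncLock();

  private static Task QueueValue(int i, TaskScheduler ts)
  {
    return QueueTask(async () =>
    {
      using (await mutex.LockAsync())
      {
        Console.WriteLine("Start {0}", i);
        await Task.Delay(1000);
        Console.WriteLine("End {0}", i);
      }
    }, ts);
  }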


Answer 2:

The best solution I could find was to make my own version of QueuedTaskScheduler (the original is found in the Parallel Extensions Extras samples source code).

I added a bool awaitWrappedTasks parameter to the constructors of QueuedTaskScheduler:

public QueuedTaskScheduler(
        TaskScheduler targetScheduler,
        int maxConcurrencyLevel,
        bool awaitWrappedTasks = false)
{
    ...
    _awaitWrappedTasks = awaitWrappedTasks;
    ...
}

public QueuedTaskScheduler(
        int threadCount,
        string threadName = "",
        bool useForegroundThreads = false,
        ThreadPriority threadPriority = ThreadPriority.Normal,
        ApartmentState threadApartmentState = ApartmentState.MTA,
        int threadMaxStackSize = 0,
        Action threadInit = null,
        Action threadFinally = null,
        bool awaitWrappedTasks = false)
{
    ...
    _awaitWrappedTasks = awaitWrappedTasks;

    // code starting threads (removed here in example)
    ...
}

I then modified the ProcessPrioritizedAndBatchedTasks() method to be async:

private async void ProcessPrioritizedAndBatchedTasks()

Then I modified the part of the code right after the scheduled task is executed:

private async void ProcessPrioritizedAndBatchedTasks()
{
    bool continueProcessing = true;
    while (!_disposeCancellation.IsCancellationRequested && continueProcessing)
    {
        try
        {
            // Note that we're processing tasks on this thread
            _taskProcessingThread.Value = true;

            // Until there are no more tasks to process
            while (!_disposeCancellation.IsCancellationRequested)
            {
                // Try to get the next task.  If there aren't any more, we're done.
                Task targetTask;
                lock (_nonthreadsafeTaskQueue)
                {
                    if (_nonthreadsafeTaskQueue.Count == 0) break;
                    targetTask = _nonthreadsafeTaskQueue.Dequeue();
                }

                // If the task is null, it's a placeholder for a task in the round-robin queues.
                // Find the next one that should be processed.
                QueuedTaskSchedulerQueue queueForTargetTask = null;
                if (targetTask == null)
                {
                    lock (_queueGroups) FindNextTask_NeedsLock(out targetTask, out queueForTargetTask);
                }

                // Now if we finally have a task, run it.  If the task
                // was associated with one of the round-robin schedulers, we need to use it
                // as a thunk to execute its task.
                if (targetTask != null)
                {
                    if (queueForTargetTask != null) queueForTargetTask.ExecuteTask(targetTask);
                    else TryExecuteTask(targetTask);

                    // ***** MODIFIED CODE START ****
                    if (_awaitWrappedTasks)
                    {
                        var targetTaskType = targetTask.GetType();
                        if (targetTaskType.IsConstructedGenericType && typeof(Task).IsAssignableFrom(targetTaskType.GetGenericArguments()[0]))
                        {
                            dynamic targetTaskDynamic = targetTask;
                            // Await the completion of the proxy task here.
                            // We do not await the proxy task directly, because awaiting it would rethrow the exception of the wrapped task (if there was one).
                            // In the continuation we simply return the exception object, so that the exception (stored in the proxy task) does not go completely unobserved (which could crash the process).
                            await TaskExtensions.Unwrap(targetTaskDynamic).ContinueWith((Func<Task, Exception>)(t => t.Exception), TaskContinuationOptions.ExecuteSynchronously);
                        }
                    }
                    // ***** MODIFIED CODE END ****
                }
            }
        }
        finally
        {
            // Now that we think we're done, verify that there really is
            // no more work to do.  If there's not, highlight
            // that we're now less parallel than we were a moment ago.
            lock (_nonthreadsafeTaskQueue)
            {
                if (_nonthreadsafeTaskQueue.Count == 0)
                {
                    _delegatesQueuedOrRunning--;
                    continueProcessing = false;
                    _taskProcessingThread.Value = false;
                }
            }
        }
    }
}

The change to ThreadBasedDispatchLoop is a bit different, because there we cannot use the async keyword, otherwise we would break the ability to execute the scheduled tasks on the dedicated thread(s). So here is the modified version of ThreadBasedDispatchLoop:

private void ThreadBasedDispatchLoop(Action threadInit, Action threadFinally)
{
    _taskProcessingThread.Value = true;
    if (threadInit != null) threadInit();
    try
    {
        // If the scheduler is disposed, the cancellation token will be set and
        // we'll receive an OperationCanceledException.  That OCE should not crash the process.
        try
        {
            // If a thread abort occurs, we'll try to reset it and continue running.
            while (true)
            {
                try
                {
                    // For each task queued to the scheduler, try to execute it.
                    foreach (var task in _blockingTaskQueue.GetConsumingEnumerable(_disposeCancellation.Token))
                    {
                        Task targetTask = task;
                        // If the task is not null, that means it was queued to this scheduler directly.
                        // Run it.
                        if (targetTask != null)
                        {
                            TryExecuteTask(targetTask);
                        }
                        // If the task is null, that means it's just a placeholder for a task
                        // queued to one of the subschedulers.  Find the next task based on
                        // priority and fairness and run it.
                        else
                        {
                            // Find the next task based on our ordering rules...                                    
                            QueuedTaskSchedulerQueue queueForTargetTask;
                            lock (_queueGroups) FindNextTask_NeedsLock(out targetTask, out queueForTargetTask);

                            // ... and if we found one, run it
                            if (targetTask != null) queueForTargetTask.ExecuteTask(targetTask);
                        }

                        // Guard against targetTask being null (no subscheduler task was found above).
                        if (_awaitWrappedTasks && targetTask != null)
                        {
                            var targetTaskType = targetTask.GetType();
                            if (targetTaskType.IsConstructedGenericType && typeof(Task).IsAssignableFrom(targetTaskType.GetGenericArguments()[0]))
                            {
                                dynamic targetTaskDynamic = targetTask;
                                // Wait for the completion of the proxy task here.
                                // We do not wait on the proxy task directly, because Wait() would rethrow the exception of the wrapped task (if there was one).
                                // In the continuation we simply return the exception object, so that the exception (stored in the proxy task) does not go completely unobserved (which could crash the process).
                                TaskExtensions.Unwrap(targetTaskDynamic).ContinueWith((Func<Task, Exception>)(t => t.Exception), TaskContinuationOptions.ExecuteSynchronously).Wait();
                            }
                        }
                    }
                }
                catch (ThreadAbortException)
                {
                    // If we received a thread abort, and that thread abort was due to shutting down
                    // or unloading, let it pass through.  Otherwise, reset the abort so we can
                    // continue processing work items.
                    if (!Environment.HasShutdownStarted && !AppDomain.CurrentDomain.IsFinalizingForUnload())
                    {
                        Thread.ResetAbort();
                    }
                }
            }
        }
        catch (OperationCanceledException) { }
    }
    finally
    {
        // Run a cleanup routine if there was one
        if (threadFinally != null) threadFinally();
        _taskProcessingThread.Value = false;
    }
}

I have tested this and it gives the desired output. This technique could also be applied to any other scheduler, e.g. LimitedConcurrencyLevelTaskScheduler and OrderedTaskScheduler.
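
For reference, a usage sketch of the modified scheduler, assuming the constructors shown above with the new awaitWrappedTasks parameter (the rest mirrors the code from the question):

private static QueuedTaskScheduler queueScheduler =
    new QueuedTaskScheduler(targetScheduler: TaskScheduler.Default,
                            maxConcurrencyLevel: 1,
                            awaitWrappedTasks: true);

static void Main(string[] args)
{
    var ts_priority1 = queueScheduler.ActivateNewQueue(1);
    var ts_priority2 = queueScheduler.ActivateNewQueue(2);

    // With awaitWrappedTasks set to true, the scheduler awaits the inner task of
    // each wrapped (async) work item before dequeuing the next one, so the output
    // becomes "Start n" / "End n" pairs in priority order.
    QueueValue(1, ts_priority2);
    QueueValue(4, ts_priority1);

    Console.ReadLine();
}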



Answer 3:

I don't think it is possible to achieve this. A core problem is that a TaskScheduler can only be used to run code. But there are tasks that do not run code, such as IO tasks or timer tasks. I don't think the TaskScheduler infrastructure can be used to schedule those.

From the perspective of a TaskScheduler, things look like this:

1. Select a registered task for execution
2. Execute its code on the CPU
3. Repeat

Step (2) is synchronous, which means the Task being executed must start and finish as part of step (2). This means the Task cannot do asynchronous IO, because that would be non-blocking. In that sense, a TaskScheduler only supports blocking code.
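
That an async method consists of multiple Tasks can be seen directly: the scheduler only executes the synchronous portion up to the first await, and the remainder runs as separate continuations. A small illustrative sketch (the names are only for demonstration):

using System;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        // StartNew with an async delegate produces a Task<Task>: the outer task is
        // what the scheduler runs, the inner task represents the rest of the method.
        Task<Task> outer = Task.Factory.StartNew(async () =>
        {
            Console.WriteLine("Synchronous part runs on the scheduler");
            await Task.Delay(1000);   // the scheduler's involvement ends here
            Console.WriteLine("Continuation runs later as a separate work item");
        });

        outer.Wait();                 // completes as soon as the first await is reached
        Console.WriteLine("Outer done; inner still running: {0}", !outer.Result.IsCompleted);
        outer.Result.Wait();          // wait for the wrapped (inner) task to finish
    }
}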

I think your best bet is to implement your own version of AsyncSemaphore that releases waiters in priority order, and do the throttling with it. Your async methods can await it in a non-blocking way, so there is no need to start custom threads: all CPU work can run on the default thread-pool TaskScheduler, and IO tasks can keep using non-blocking IO.
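
A minimal sketch of that idea, a priority-aware async semaphore (the type name PriorityAsyncSemaphore and its members are hypothetical, not taken from an existing library):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public sealed class PriorityAsyncSemaphore
{
    private readonly object _gate = new object();
    // Waiters grouped by priority; lower number = higher priority, FIFO within a priority.
    private readonly SortedDictionary<int, Queue<TaskCompletionSource<object>>> _waiters =
        new SortedDictionary<int, Queue<TaskCompletionSource<object>>>();
    private int _count;

    public PriorityAsyncSemaphore(int initialCount) { _count = initialCount; }

    public Task WaitAsync(int priority)
    {
        lock (_gate)
        {
            if (_count > 0)
            {
                _count--;
                return Task.CompletedTask;
            }
            var tcs = new TaskCompletionSource<object>(TaskCreationOptions.RunContinuationsAsynchronously);
            if (!_waiters.TryGetValue(priority, out var queue))
                _waiters[priority] = queue = new Queue<TaskCompletionSource<object>>();
            queue.Enqueue(tcs);
            return tcs.Task;
        }
    }

    public void Release()
    {
        TaskCompletionSource<object> toRelease = null;
        lock (_gate)
        {
            if (_waiters.Count > 0)
            {
                // SortedDictionary enumerates keys in ascending order,
                // so First() is the highest-priority group.
                var group = _waiters.First();
                toRelease = group.Value.Dequeue();
                if (group.Value.Count == 0) _waiters.Remove(group.Key);
            }
            else
            {
                _count++;
            }
        }
        toRelease?.SetResult(null);   // complete the waiter outside the lock
    }
}

An async method would then wrap its body in await prioritySemaphore.WaitAsync(priority); try { ... } finally { prioritySemaphore.Release(); }, much like the SemaphoreSlim example in Answer 1.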



Source: Limited concurrency level task scheduler (with task priority) handling wrapped tasks