This question is inspired by some comments on an earlier Stack Overflow question on the same topic, and also motivated by some code I'm writing. Given the example below, I'm fairly convinced that this pattern is tail recursive. If it is, how do I mitigate the memory leak posed by accumulating futures whose underlying threads never join the ForkJoinPool from which they were spawned?
import com.ning.http.client.AsyncHttpClientConfig.Builder
import play.api.libs.iteratee.Iteratee
import play.api.libs.iteratee.Execution.Implicits.defaultExecutionContext
import play.api.libs.ws.ning.NingWSClient
import scala.util.{Success, Failure}

object Client {
  val client = new NingWSClient(new Builder().build())

  // Consumes each chunk of the streamed response body as it arrives.
  def print = Iteratee.foreach { chunk: Array[Byte] => println(new String(chunk)) }

  def main(args: Array[String]) {
    connect()

    def connect(): Unit = {
      val consumer = client.url("http://streaming.resource.com")
      consumer.get(_ => print).onComplete {
        case Success(s) => println("Success")
        // On failure, schedule another attempt from inside the callback.
        case Failure(f) => println("Recursive retry"); connect()
      }
    }
  }
}
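
To isolate the pattern from the Play/Ning specifics, here is a minimal sketch of what I understand the control flow to be, using only plain scala.concurrent futures. flakyCall is a hypothetical stand-in for the real client.url(...).get(...) call, and I've assumed it fails twice before succeeding. As far as I can tell, connect() returns before its callback ever fires, so the "recursion" happens through the execution context rather than the call stack:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Success, Failure}

object RetrySketch {
  // Hypothetical stand-in for client.url(...).get(...): fails twice, then succeeds.
  def flakyCall(attempt: Int): Future[String] =
    if (attempt < 3) Future.failed(new RuntimeException(s"attempt $attempt failed"))
    else Future.successful("connected")

  def connect(attempt: Int = 0): Unit =
    flakyCall(attempt).onComplete {
      case Success(s) => println(s)
      // By the time this callback runs, connect() has already returned; the
      // retry is scheduled on the execution context, not pushed onto the stack.
      case Failure(_) => println("Recursive retry"); connect(attempt + 1)
    }

  def main(args: Array[String]): Unit = {
    connect()
    Thread.sleep(1000) // keep the JVM alive long enough to observe the retries
  }
}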
In the example I've shared, the get[A](...) method returns a Future[Iteratee[Array[Byte], A]]. The author of the question I linked above remarks that "scala.concurrent Futures don't get merged" once they return, but that Twitter's futures somehow manage this. I'm using the Play Framework implementation, however, which uses the futures provided by the standard Scala 2.1x library.
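
One workaround I've been considering, again only a sketch with the same hypothetical flakyCall standing in for the real request, is to flatten the retries into a single Future with recoverWith. My (possibly wrong) understanding is that once a failed intermediate future's recovery callback has run, nothing still references that future, so there is nothing left to "merge":

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object FlattenedRetry {
  // Same hypothetical stand-in as in the sketch above.
  def flakyCall(attempt: Int): Future[String] =
    if (attempt < 3) Future.failed(new RuntimeException("boom"))
    else Future.successful("connected")

  // Each failed intermediate future is consumed by its own recoverWith
  // callback; once that fires, I don't see anything still holding on to it.
  def connect(attempt: Int = 0): Future[String] =
    flakyCall(attempt).recoverWith {
      case _ => println("Recursive retry"); connect(attempt + 1)
    }
}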
Do any of you have evidence that supports or dismisses these claims? Does my code pose a memory leak?