I have to do a project on distributed rendering of a 3D image. I can use standard algorithms; the aim is to learn Hadoop, not image processing. Can anyone suggest which language I should use, C++ or Java, and point me to a standard implementation of a 3D renderer? Any other help would be highly appreciated.
Hadoop uses Map/Reduce functions for its data processing. The data gets split up into manageable chunks for processing (Map phase), then recombined to give the result (Reduce phase).
There are specific languages for data processing (see Pig and Hive), or you can write your own M/R jobs in Java, C++ (via Hadoop Pipes), Python (via Hadoop Streaming), etc.
I don't know anything about image processing, but if you're going to use Hadoop, your first task will be to figure out how to break your problem down into chunks that can be passed to the M/R process. Michael Noll's Map/Reduce tutorial may help you get started.
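As a rough illustration of what that split might look like, here is a minimal sketch of a map-only Hadoop job in Java. It assumes each line of the input file describes one tile of the image ("tileId x y width height"); the `renderTile` helper is purely hypothetical and stands in for whichever renderer you end up plugging in.

```java
// Minimal sketch: each map task renders one tile described by an input line.
// Assumption: input lines look like "tileId x y width height".
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TileRenderJob {

    public static class TileMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] parts = value.toString().trim().split("\\s+");
            if (parts.length != 5) {
                return; // skip malformed lines
            }
            String tileId = parts[0];
            // Hypothetical renderer call: returns the rendered tile as a string
            String pixels = renderTile(
                    Integer.parseInt(parts[1]), Integer.parseInt(parts[2]),
                    Integer.parseInt(parts[3]), Integer.parseInt(parts[4]));
            context.write(new Text(tileId), new Text(pixels));
        }

        // Placeholder: replace with a real renderer (e.g. a simple ray tracer)
        private String renderTile(int x, int y, int width, int height) {
            return "rendered(" + x + "," + y + "," + width + "x" + height + ")";
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "tile render");
        job.setJarByClass(TileRenderJob.class);
        job.setMapperClass(TileMapper.class);
        // No reducer: each map output is written straight to HDFS, and the
        // tiles can be stitched together in a separate step afterwards.
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

If you do want a Reduce phase (for example, to assemble tiles into rows or into the final image), you would emit a key that groups the tiles accordingly and add a reducer that concatenates them.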
HTH