WritableFactories is a factory class used to create instance objects. The Javadoc on this class reads "Factories for non-public writables"; my understanding is that it exists to instantiate objects whose concrete type is not known in advance. For example, ObjectWritable's readObject method calls WritableFactories:
public static Object readObject(DataInput in, ObjectWritable objectWritable,
                                Configuration conf) throws IOException {
  ...
  // Writable
  Class instanceClass = null;
  String str = UTF8.readString(in);
  instanceClass = loadClass(conf, str);
  Writable writable = WritableFactories.newInstance(instanceClass, conf);
  writable.readFields(in);
  instance = writable;
  ...
  return instance;
}
WritableFactories provides a registration mechanism: a user can define a custom WritableFactory for a given type and register it by calling WritableFactories.setFactory(). setFactory() takes two arguments: the Class object of the type being registered, and an implementation of the WritableFactory interface that can construct instances of that type.
private static final Map<Class, WritableFactory> CLASS_TO_FACTORY =
    new ConcurrentHashMap<Class, WritableFactory>();

private WritableFactories() {}  // singleton

/** Define a factory for a class. */
public static void setFactory(Class c, WritableFactory factory) {
  CLASS_TO_FACTORY.put(c, factory);
}
HDFS's Block class calls this method in a static initializer:
public class Block implements Writable, Comparable<Block> {
  public static final String BLOCK_FILE_PREFIX = "blk_";
  public static final String METADATA_EXTENSION = ".meta";

  static {                                      // register a ctor
    WritableFactories.setFactory(Block.class, new WritableFactory() {
      @Override
      public Writable newInstance() {
        return new Block();
      }
    });
  }
The core method of WritableFactories is, of course, the one that creates objects: newInstance. Its logic is simple: first look up the factory registered for the given class; if one exists, use it to build the instance; if there is no factory, fall back to Java reflection.
/** Create a new instance of a class with a defined factory. */
public static Writable newInstance(Class<? extends Writable> c, Configuration conf) {
  // Look up the factory registered for this class
  WritableFactory factory = WritableFactories.getFactory(c);
  if (factory != null) {
    // Create the instance through the factory
    Writable result = factory.newInstance();
    // If the instance is Configurable, inject the Configuration object
    if (result instanceof Configurable) {
      ((Configurable) result).setConf(conf);
    }
    return result;
  } else {
    // No factory registered: fall back to reflection
    return ReflectionUtils.newInstance(c, conf);
  }
}
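To tie the pieces together, here is a minimal sketch (not from the Hadoop source) of how a user-defined Writable might register its own factory and then be instantiated through WritableFactories.newInstance. The MyRecord class and its field are hypothetical; the snippet only assumes the hadoop-common classes WritableFactories, WritableFactory, and Writable are on the classpath.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableFactories;
import org.apache.hadoop.io.WritableFactory;

// Hypothetical Writable with a non-public no-arg constructor,
// matching the "non-public writables" use case of WritableFactories.
public class MyRecord implements Writable {
  private long id;

  protected MyRecord() {}   // non-public constructor

  static {
    // Register a factory so WritableFactories can build instances directly,
    // without relying on reflection.
    WritableFactories.setFactory(MyRecord.class, new WritableFactory() {
      @Override
      public Writable newInstance() {
        return new MyRecord();
      }
    });
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(id);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    id = in.readLong();
  }
}

Deserialization code would then obtain an empty instance with WritableFactories.newInstance(MyRecord.class, conf) and fill it in by calling readFields(in), exactly as ObjectWritable.readObject does above.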