public interface ItemReader {
    Object read() throws Exception;
    void mark() throws MarkFailedException;
    void reset() throws ResetFailedException;
}
The same interfaces in version 2.0 are given below:

public interface ItemReader<T> {
    T read() throws Exception, UnexpectedInputException, ParseException;
}

public interface ItemWriter<T> {
    void write(List<? extends T> items) throws Exception;
}
As you can see, ItemReader now supports a generic type, T, which is the type returned from read. You may also notice that mark and reset have been removed; this is due to the step processing strategy changes discussed below. Many other interfaces have been similarly updated.
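For example, a reader written against the 2.0 interface is declared directly in terms of its item type. The class below is a made-up sketch, not part of Spring Batch or of this example, purely to show the generic contract and the absence of mark and reset:

import java.util.Iterator;
import java.util.List;

import org.springframework.batch.item.ItemReader;

// A minimal 2.0-style reader: read() returns the item type directly, and
// there is no mark()/reset() to implement.
public class StringListItemReader implements ItemReader<String> {

    private final Iterator<String> iterator;

    public StringListItemReader(List<String> lines) {
        this.iterator = lines.iterator();
    }

    public String read() {
        // Returning null signals that the input is exhausted.
        return iterator.hasNext() ? iterator.next() : null;
    }
}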

Version 2.0 also adds conditional flow between steps, declared with on/to transitions in the job XML:

<job id="job">
    <step id="stepA">
        <next on="FAILED" to="stepB"/>
        <next on="*" to="stepC"/>
    </step>
    <step id="stepB" next="stepC"/>
    <step id="stepC"/>
</job>
In 1.x, the default behavior was item-oriented processing: ItemReader returns one Object (the 'item'), which is then handed to the ItemWriter, committing periodically when the number of items hits the commit interval. For example, if the commit interval is 5, ItemReader and ItemWriter will each be called 5 times. This is illustrated in a simplified code example below:

for (int i = 0; i < commitInterval; i++) {
    Object item = itemReader.read();
    itemWriter.write(item);
}

In 2.0, this has changed to chunk-oriented processing: items are still read one at a time, but the entire chunk is written out in a single call to the ItemWriter:
List items = new ArrayList();
for (int i = 0; i < commitInterval; i++) {
    items.add(itemReader.read());
}
itemWriter.write(items);
In the previous 1.x versions, a Step had only two dependencies, ItemReader and ItemWriter.

The JobRepository interface represents basic CRUD operations on Job meta-data. However, it is often also useful to query that meta-data, and for that reason the JobExplorer and JobOperator interfaces have been created: JobExplorer offers read-only queries against the repository, while JobOperator adds operational methods such as starting, stopping, and restarting jobs by id.
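A rough sketch of how the two interfaces divide that work is given below. It assumes a JobExplorer and a JobOperator have already been wired up (their configuration is not shown in this post), and the class and variable names are made up purely for illustration:

import java.util.List;
import java.util.Set;

import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobOperator;

public class JobMetaDataQueries {

    // Both collaborators are assumed to be injected; wiring them up is outside
    // the scope of this sketch.
    private final JobExplorer jobExplorer;
    private final JobOperator jobOperator;

    public JobMetaDataQueries(JobExplorer jobExplorer, JobOperator jobOperator) {
        this.jobExplorer = jobExplorer;
        this.jobOperator = jobOperator;
    }

    public void printRecentRuns() throws Exception {
        // Read-only query against the job meta-data.
        List<JobInstance> instances = jobExplorer.getJobInstances("myEmpExpireJob", 0, 10);
        System.out.println("Last 10 instances: " + instances);

        // Operational view: ids of the currently running executions.
        Set<Long> running = jobOperator.getRunningExecutions("myEmpExpireJob");
        System.out.println("Running executions: " + running);
    }
}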
For partitioned step execution there are two new strategy interfaces, PartitionHandler and StepExecutionSplitter. The PartitionHandler is the one that knows about the execution fabric: it has to transmit requests to remote steps and collect the results using whatever grid or remoting technology is available. PartitionHandler is an SPI, and Spring Batch provides one implementation out of the box for local execution through a TaskExecutor. This is immediately useful when parallel processing of heavily IO-bound tasks is required, since in those cases remote execution would only complicate the deployment without necessarily helping much with the performance. Other implementations will be specific to the execution fabric (e.g. one of the grid providers such as IBM, Oracle, Terracotta, Appistry, etc.); Spring Batch expresses no preference for one grid provider over another.
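The local implementation mentioned above is TaskExecutorPartitionHandler. A minimal sketch of setting it up in code follows; the worker step, the grid size of 4 and the choice of SimpleAsyncTaskExecutor are illustrative assumptions, not values taken from this post:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

public class LocalPartitioningConfig {

    // Builds the TaskExecutor-based PartitionHandler for local, multi-threaded
    // execution. The worker step is assumed to be defined elsewhere.
    public TaskExecutorPartitionHandler localPartitionHandler(Step workerStep) {
        TaskExecutorPartitionHandler handler = new TaskExecutorPartitionHandler();
        handler.setTaskExecutor(new SimpleAsyncTaskExecutor()); // one new thread per partition
        handler.setStep(workerStep);  // the Step executed once per partition
        handler.setGridSize(4);       // number of partitions to create
        return handler;
    }
}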
Finally, to compare configuration styles: in version 1.x, a job was defined as plain Spring beans:

<bean class="org.springframework.batch.core.job.SimpleJob" id="myEmpExpireJob">
    <property name="steps">
        <list>
            <!-- Step bean details omitted for clarity -->
            <bean id="readEmployeeData"/>
            <bean id="writeEmployeeData"/>
            <bean id="employeeDataProcess"/>
        </list>
    </property>
    <property name="jobRepository" ref="jobRepository"/>
</bean>
In version 2.0, the equivalent would be:

<job id="myEmpExpireJob">
    <!-- Step bean details omitted for clarity -->
    <step id="readEmployeeData" next="writeEmployeeData"/>
    <step id="writeEmployeeData" next="employeeDataProcess"/>
    <step id="employeeDataProcess"/>
</job>