MyBatis Framework (Part 2): First- and Second-Level Caches
The previous article mentioned that queries go through a cache. The two built-in cache levels look like this:
// First-level cache: lives in the executor and is bound to the SqlSession
// org.apache.ibatis.executor.BaseExecutor#localCache
// which ultimately points to org.apache.ibatis.cache.impl.PerpetualCache#cache
private Map<Object, Object> cache = new HashMap<>();
- Both levels are query caches: select writes entries, while insert, update, and delete clear them
- Both levels ultimately point to org.apache.ibatis.cache.impl.PerpetualCache#cache, which is essentially a HashMap
- Both levels compute keys the same way, in org.apache.ibatis.executor.BaseExecutor#createCacheKey; a key is essentially: statement id + offset + limit + sql + parameter values
- The first-level cache shares the SqlSession's lifecycle and is enabled by default; the second-level cache shares the SqlSessionFactory's lifecycle and must be enabled manually
- Mappers in the same namespace share one second-level cache; the second-level cache is tied to the transaction: data is written to it only when the transaction commits, and never on rollback
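Since both levels store entries in a plain HashMap, the storage primitive is easy to picture. Below is a minimal sketch in the shape of PerpetualCache (simplified, not the actual MyBatis source; the cache id string is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of org.apache.ibatis.cache.impl.PerpetualCache: a thin wrapper
// around a HashMap, identified by an id (the namespace for the
// second-level cache).
class SimplePerpetualCache {
    private final String id;
    private final Map<Object, Object> cache = new HashMap<>();

    SimplePerpetualCache(String id) { this.id = id; }

    String getId() { return id; }
    void putObject(Object key, Object value) { cache.put(key, value); }
    Object getObject(Object key) { return cache.get(key); }
    Object removeObject(Object key) { return cache.remove(key); }
    void clear() { cache.clear(); }
    int getSize() { return cache.size(); }
}
```

All the richer cache behavior (LRU eviction, logging, synchronization) is layered on top of this map by decorators, as the second half of this article shows.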
First-level cache
The first-level cache lives as long as its SqlSession: within one SqlSession, repeated queries with the same SQL and the same parameters hit the first-level cache on every run after the first.
It is enabled by default. To effectively disable it, set its scope to STATEMENT in the configuration:
<setting name="localCacheScope" value="STATEMENT"/>
// == if unset, the default is SESSION (the source-code walkthrough below comes back to this)
Start from the query method as the entry point:
org.apache.ibatis.session.defaults.DefaultSqlSession#selectList(java.lang.String, java.lang.Object, org.apache.ibatis.session.RowBounds)
org.apache.ibatis.executor.BaseExecutor#query(org.apache.ibatis.mapping.MappedStatement, java.lang.Object, org.apache.ibatis.session.RowBounds, org.apache.ibatis.session.ResultHandler)
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameter);
    // == compute the CacheKey
    CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
    // == run the query through the cache
    return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
}
CacheKey computation
org.apache.ibatis.executor.BaseExecutor#createCacheKey
public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
    CacheKey cacheKey = new CacheKey();
    // == each update() call folds one component into the key
    cacheKey.update(ms.getId());
    cacheKey.update(rowBounds.getOffset());
    cacheKey.update(rowBounds.getLimit());
    cacheKey.update(boundSql.getSql());
    // value is a parameter value (the real method loops over the parameter mappings)
    cacheKey.update(value);
    return cacheKey;
}
From this we can already guess that a CacheKey depends on the statement id, offset, limit, sql, and parameter values.
Step into CacheKey to verify that guess:
### The CacheKey class ###
// default 37
private final int multiplier;
// default 17
private int hashcode;
private long checksum;
private int count;
private List<Object> updateList;

public void update(Object object) {
    int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);
    // -- fold the object into a few running fields
    count++;
    checksum += baseHashCode;
    baseHashCode *= count;
    hashcode = multiplier * hashcode + baseHashCode;
    // -- and record it in updateList
    updateList.add(object);
}

public boolean equals(Object object) {
    // (excerpt; the real method first casts object to a CacheKey named cacheKey)
    // -- compare the cheap fields first
    if (hashcode != cacheKey.hashcode) {
        return false;
    }
    if (checksum != cacheKey.checksum) {
        return false;
    }
    if (count != cacheKey.count) {
        return false;
    }
    // -- then compare the objects in updateList one by one
    for (int i = 0; i < updateList.size(); i++) {
        Object thisObject = updateList.get(i);
        Object thatObject = cacheKey.updateList.get(i);
        if (!ArrayUtil.equals(thisObject, thatObject)) {
            return false;
        }
    }
    return true;
}
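The update/equals mechanics above can be exercised with a condensed re-implementation. This is a sketch, not the MyBatis class: it uses `Object.hashCode()` in place of `ArrayUtil.hashCode` and `Objects.equals` in place of `ArrayUtil.equals`, which matters only for array-typed parameters.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Condensed CacheKey: update() folds each component into
// hashcode/checksum/count and records it for a full equality check.
class SimpleCacheKey {
    private static final int MULTIPLIER = 37;
    private int hashcode = 17;
    private long checksum;
    private int count;
    private final List<Object> updateList = new ArrayList<>();

    void update(Object object) {
        int baseHashCode = object == null ? 1 : object.hashCode();
        count++;
        checksum += baseHashCode;
        baseHashCode *= count;
        hashcode = MULTIPLIER * hashcode + baseHashCode;
        updateList.add(object);
    }

    @Override
    public int hashCode() { return hashcode; }

    @Override
    public boolean equals(Object object) {
        if (this == object) return true;
        if (!(object instanceof SimpleCacheKey)) return false;
        SimpleCacheKey that = (SimpleCacheKey) object;
        // cheap running fields first, then every recorded component
        if (hashcode != that.hashcode || checksum != that.checksum || count != that.count) {
            return false;
        }
        for (int i = 0; i < updateList.size(); i++) {
            if (!Objects.equals(updateList.get(i), that.updateList.get(i))) {
                return false;
            }
        }
        return true;
    }
}
```

Two keys built from the same statement id, offset, limit, SQL, and parameters compare equal, so a repeated query maps to the same cache slot; changing any single component breaks the equality.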
Using the cache during a query
org.apache.ibatis.executor.BaseExecutor#query(org.apache.ibatis.mapping.MappedStatement, java.lang.Object, org.apache.ibatis.session.RowBounds, org.apache.ibatis.session.ResultHandler, org.apache.ibatis.cache.CacheKey, org.apache.ibatis.mapping.BoundSql)
public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    List<E> list;
    try {
        queryStack++;
        // == 1. try localCache first
        list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
        if (list != null) {
            handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
        } else {
            // == 2. nothing cached, query the database
            list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
        }
    } finally {
        queryStack--;
    }
    // ## if the scope is set to STATEMENT, the first-level cache is cleared here
    if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
        // clear the cache
        clearLocalCache();
    }
    return list;
}
Next, follow the cache-miss branch into queryFromDatabase:
org.apache.ibatis.executor.BaseExecutor#queryFromDatabase
private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    List<E> list;
    // put a placeholder in the cache to mark the query as in progress
    localCache.putObject(key, EXECUTION_PLACEHOLDER);
    try {
        // == the actual database query
        list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
    } finally {
        localCache.removeObject(key);
    }
    // == put the result into the first-level cache
    localCache.putObject(key, list);
    if (ms.getStatementType() == StatementType.CALLABLE) {
        localOutputParameterCache.putObject(key, parameter);
    }
    return list;
}
In summary, every query stores its result in localCache. With the scope set to STATEMENT, the cache is simply cleared after each statement, which is the whole secret behind "disabling" the first-level cache.
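The effect of the two scopes can be shown with a toy executor (hypothetical names, not the MyBatis API), mirroring the clearLocalCache() call at the end of BaseExecutor#query:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of first-level caching: under SESSION scope an entry survives
// across queries in the same session; under STATEMENT scope the local
// cache is cleared after every query.
class LocalCacheDemo {
    enum Scope { SESSION, STATEMENT }

    private final Scope scope;
    private final Map<Object, Object> localCache = new HashMap<>();
    int dbHits; // counts how often we had to go to the "database"

    LocalCacheDemo(Scope scope) { this.scope = scope; }

    Object query(Object key, Supplier<Object> db) {
        Object result = localCache.get(key);
        if (result == null) {
            dbHits++;               // cache miss: hit the database
            result = db.get();
            localCache.put(key, result);
        }
        if (scope == Scope.STATEMENT) {
            localCache.clear();     // mirrors clearLocalCache()
        }
        return result;
    }
}
```

Running the same query twice hits the database once under SESSION scope and twice under STATEMENT scope.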
Clearing the cache on insert/update/delete
insert and delete both end up running update:
public int insert(String statement) {
    return insert(statement, null);
}

public int delete(String statement) {
    return update(statement, null);
}
So it is enough to look at update:
public int update(MappedStatement ms, Object parameter) throws SQLException {
    ErrorContext.instance().resource(ms.getResource()).activity("executing an update").object(ms.getId());
    if (closed) {
        throw new ExecutorException("Executor was closed.");
    }
    // == clear the first-level cache
    clearLocalCache();
    return doUpdate(ms, parameter);
}
Second-level cache
The second-level cache must be switched on:
- Step 1: make sure cacheEnabled is true in the global settings (it defaults to true)
- Step 2: declare a <cache/> element in the mapper XML
By default, the second-level cache is keyed by namespace. To reuse another namespace's cache configuration, use the <cache-ref namespace="..."/> tag, whose namespace attribute names the mapper whose cache to share.
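Putting the two steps and the cross-namespace reference together, a typical setup might look like this (the namespace value is illustrative):

```xml
<!-- Step 1: mybatis-config.xml (cacheEnabled already defaults to true) -->
<settings>
    <setting name="cacheEnabled" value="true"/>
</settings>

<!-- Step 2: declare a cache in the mapper XML -->
<cache/>

<!-- Alternatively, reuse another namespace's cache instead of declaring one -->
<cache-ref namespace="com.example.mapper.UserMapper"/>
```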
CachingExecutor
The second-level cache's entry point is where the executor is created:
public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
    Executor executor;
    if (ExecutorType.BATCH == executorType) {
        executor = new BatchExecutor(this, transaction);
    } else if (ExecutorType.REUSE == executorType) {
        executor = new ReuseExecutor(this, transaction);
    } else {
        // by default, create a SimpleExecutor
        executor = new SimpleExecutor(this, transaction);
    }
    if (cacheEnabled) {
        // == with the second-level cache enabled, wrap it in a CachingExecutor (decorator pattern)
        executor = new CachingExecutor(executor);
    }
    return executor;
}
What does the constructor do?
// the two executors reference each other
public CachingExecutor(Executor delegate) {
    this.delegate = delegate;
    delegate.setExecutorWrapper(this);
}
After this assignment, CachingExecutor holds the SimpleExecutor as its delegate, and the inner executor points back to its wrapper.
With that in mind, look at CachingExecutor's query method:
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameterObject);
    // == delegates createCacheKey to the wrapped executor (analyzed earlier)
    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
    // == second-level cache lookup
    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}
And the overloaded query it calls:
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
        throws SQLException {
    // ## A. get the cache from the MappedStatement
    Cache cache = ms.getCache();
    if (cache != null) {
        // flush the cache if the statement requires it
        flushCacheIfRequired(ms);
        if (ms.isUseCache() && resultHandler == null) {
            ensureNoOutParams(ms, boundSql);
            // -- 1. look the result up through tcm
            List<E> list = (List<E>) tcm.getObject(cache, key);
            if (list == null) {
                // -- 2. not in tcm: fall through to the wrapped executor (first-level cache + JDBC)
                list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
                // -- 3. and put the result into tcm
                tcm.putObject(cache, key, list);
            }
            return list;
        }
    }
    return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

// ## B. tcm is declared here
private final TransactionalCacheManager tcm = new TransactionalCacheManager();
The overall logic is simple, but two questions remain:
- When is the second-level cache obtained from the MappedStatement (point A above) initialized?
- What is the relationship between the second-level cache and tcm (the TransactionalCacheManager)?
The full call chain (treat it as a review):
// build the SqlSessionFactory
org.apache.ibatis.session.SqlSessionFactoryBuilder#build(java.io.Reader, java.lang.String, java.util.Properties)
org.apache.ibatis.builder.xml.XMLConfigBuilder#parse
// parse the configuration
org.apache.ibatis.builder.xml.XMLConfigBuilder#parseConfiguration
// parse the mappers
org.apache.ibatis.builder.xml.XMLConfigBuilder#mapperElement
org.apache.ibatis.builder.xml.XMLMapperBuilder#parse
org.apache.ibatis.builder.xml.XMLMapperBuilder#configurationElement {
    // == second-level cache reference (points at another namespace)
    cacheRefElement(context.evalNode("cache-ref"));
    // == second-level cache declaration
    cacheElement(context.evalNode("cache"));
}
org.apache.ibatis.builder.xml.XMLMapperBuilder#cacheElement
org.apache.ibatis.builder.MapperBuilderAssistant#useNewCache {
    // == second-level cache creation
    Cache cache = new CacheBuilder(currentNamespace)
        // -- the Cache implementation is PerpetualCache
        .implementation(valueOrDefault(typeClass, PerpetualCache.class))
        // -- decorated with LruCache
        .addDecorator(valueOrDefault(evictionClass, LruCache.class))
        .clearInterval(flushInterval)
        .size(size)
        .readWrite(readWrite)
        .blocking(blocking)
        .properties(props)
        .build();
}
The full decorator chain of the second-level cache (shown as a diagram in the original article):
SynchronizedCache -> LoggingCache -> SerializedCache -> LruCache -> PerpetualCache
The second-level cache and TransactionalCacheManager
The TransactionalCacheManager class:
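The chain is the classic decorator pattern: each wrapper adds one concern and delegates the rest. A condensed sketch with only two decorators (hypothetical names, in the spirit of SynchronizedCache and LoggingCache):

```java
import java.util.HashMap;
import java.util.Map;

interface MiniCache {
    void putObject(Object k, Object v);
    Object getObject(Object k);
}

// Innermost store, like PerpetualCache.
class MapCache implements MiniCache {
    private final Map<Object, Object> map = new HashMap<>();
    public void putObject(Object k, Object v) { map.put(k, v); }
    public Object getObject(Object k) { return map.get(k); }
}

// Like SynchronizedCache: serializes access, delegates everything else.
class SyncCache implements MiniCache {
    private final MiniCache delegate;
    SyncCache(MiniCache delegate) { this.delegate = delegate; }
    public synchronized void putObject(Object k, Object v) { delegate.putObject(k, v); }
    public synchronized Object getObject(Object k) { return delegate.getObject(k); }
}

// Like LoggingCache: tracks requests and hits on the way through.
class CountingCache implements MiniCache {
    private final MiniCache delegate;
    int requests, hits;
    CountingCache(MiniCache delegate) { this.delegate = delegate; }
    public void putObject(Object k, Object v) { delegate.putObject(k, v); }
    public Object getObject(Object k) {
        requests++;
        Object v = delegate.getObject(k);
        if (v != null) hits++;
        return v;
    }
}
```

CacheBuilder assembles the real chain the same way: start from PerpetualCache and wrap outward, so a single getObject call flows through every layer.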
// ## maintains a map whose keys are Caches and whose values are TransactionalCaches
private final Map<Cache, TransactionalCache> transactionalCaches = new HashMap<>();

public Object getObject(Cache cache, CacheKey key) {
    // ## 1. this call establishes the k-v mapping in transactionalCaches:
    //       transactionalCaches.computeIfAbsent(cache, TransactionalCache::new)
    // == 2. and the value is then read through the second-level cache
    return getTransactionalCache(cache).getObject(key);
}
Now look at TransactionalCache:
// == the second-level cache
private final Cache delegate;
// == flag: clear the second-level cache on commit
private boolean clearOnCommit;
// #### the two collections below hold temporary, per-transaction data ####
// == entries to add to the second-level cache when the transaction commits
private final Map<Object, Object> entriesToAddOnCommit;
// == keys that were missed in the second-level cache
private final Set<Object> entriesMissedInCache;

public void putObject(Object key, Object object) {
    // stage the object in entriesToAddOnCommit
    entriesToAddOnCommit.put(key, object);
}

public Object getObject(Object key) {
    // read from the second-level cache
    Object object = delegate.getObject(key);
    if (object == null) {
        // not in the second-level cache: record the key in entriesMissedInCache
        entriesMissedInCache.add(key);
    }
    // (excerpt; the real method then returns the object unless clearOnCommit is set)
    return object;
}
This already shows that the second-level cache is deeply entangled with the transaction.
What does that entanglement look like concretely?
- Transaction commit
org.apache.ibatis.cache.TransactionalCacheManager#commit
org.apache.ibatis.cache.decorators.TransactionalCache#commit
public void commit() {
    // == flush the staged entries
    flushPendingEntries();
}

private void flushPendingEntries() {
    for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
        // == move each entry from entriesToAddOnCommit into the second-level cache
        delegate.putObject(entry.getKey(), entry.getValue());
    }
}
This proves that on commit, entries are flushed from the temporary entriesToAddOnCommit collection into the second-level cache.
- Transaction rollback
org.apache.ibatis.cache.decorators.TransactionalCache#rollback
public void rollback() {
    unlockMissedEntries();
    // == reset: discard the temporary collections
    reset();
}

private void reset() {
    clearOnCommit = false;
    entriesToAddOnCommit.clear();
    entriesMissedInCache.clear();
}