Abstract: Style transfer, a concept originating in computer graphics, has attracted broad attention in cartography, and considerable effort has been devoted to cartographic style transfer algorithms and their experimental evaluation. However, the field still suffers from an unclear definition of map style and a lack of evaluation of style transfer results. This paper therefore first analyzes the concept of map style and the scenarios in which styled maps apply. It then reviews existing style transfer methods, comparing in detail three groups of image style transfer techniques: probability-statistics-based, content-parsing-based, and neural-network-based. Next, it examines three major types of map style transfer (image to map, remote sensing imagery to map, and image to relief shading) and compares the advantages and disadvantages of vector versus raster map style transfer. Finally, it outlines future research on map style transfer along three questions: how to select reference images, how to evaluate style transfer results, and how to integrate style transfer into the map design process.
Keywords:
- map style /
- style transfer /
- artificial intelligence /
- map design /
- pan-map
Throughout the historical development of cities, the road network has served as a public framework constraining urban form and human behavior. As a subsystem of the urban system, the road network is regarded as a city's "fingerprint" [1]. Road network patterns reflect the distribution characteristics of urban roads, influence urban structure and organization, and embody many of the intrinsic mechanisms of urban formation and evolution. Owing to the complexity of the urban system itself, road network patterns have been classified differently across fields [2]. In terms of the unit of analysis, pattern recognition methods can be divided into mesh-based (face-based) and line-based approaches.
Heinzle et al. [3-4] conducted a series of graph-theoretic studies on road network patterns, recognizing grid patterns from the arrangement of mesh centers, and ring patterns from a statistical indicator (Tukey depth) and a morphological indicator (affine invariants). Tian et al. [5-6] recognized grid patterns with machine learning algorithms. Yang et al. [7] combined morphological indicators with multi-criteria evaluation to detect and quantify grid patterns. Louf et al. [1] distinguished cities by computing geometric and topological properties of all meshes to obtain a typological profile of each city. Mesh-based methods take a global view, treating meshes as abstract sampling points for statistical description, but they ignore the geometric and topological characteristics of the road arcs inside each mesh, and part of the arc connectivity information is lost when the meshes are constructed.
Heinzle et al. [3] recognized grid patterns from straight lines detected by the Hough transform and their intersections, and later [8] detected radial patterns by tracing extended roads with Dijkstra's algorithm. Porta et al. [9] combined space syntax theory with centrality measures to analyze the topological patterns of urban road networks, concluding that grid patterns are ubiquitous in cities. Xie et al. [10] built graphs from road networks and used the concept of information entropy to distinguish networks of different structure. Jiang et al. [11-13] studied and defined urban topological structure using space syntax and complex network theory. Line-based methods take road arcs directly as the basic unit and focus on linear extension and local graph structure, but they lack global contextual analysis, such as the arc topology within neighboring regions, the statistical distribution of nodes, and the arrangement of edges in adjacent areas.
According to the Gestalt principles of cognition, a target is first grasped as a whole to obtain the complete pattern, and the details composing it are analyzed afterwards [14]. Mesh-based methods recognize patterns globally through statistical analysis, whereas line-based methods detect patterns through local graph-structure analysis. This paper attempts to combine the useful ideas of both: we first define a field model over the graph structure and perform statistical analysis on the linear structure, and then recognize grid patterns through feature analysis of local structure.
1 Rasterization of Network Space
Partitioning network space with uniform linear raster cells has become an important technique for network-based spatial analysis [15-18], and this paper introduces the idea into road network pattern recognition. Feature points such as road intersections and the boundary points between roads of different grades are defined as network nodes; the nodes split roads into arcs; every arc of the road network is partitioned into identical raster cells; and node-arc, cell-cell, and node-cell topological relations are constructed to build the network partition data structure.
Finer partition granularity implies higher algorithmic time complexity [19], and an appropriate granularity depends on the application and on the lengths of the network arcs. Ai et al. [15] proposed setting the granularity with reference to the mean edge length of the network: the larger the mean edge length, the larger the granularity. She et al. [16] partitioned road networks using the mean length and demonstrated its soundness, noting that because real networks contain edges shorter than the chosen granularity, the resulting cells are not strictly equal in length. This paper follows that scheme to determine the partition granularity for rasterizing the network space.
This paper treats the road network as an independent space embedded in 2D space. As shown in Fig. 1, the network space is partitioned in object space into a field model composed of continuously distributed linear raster cells. Vector operations on the original data are thereby converted into map-algebra operations such as overlay, expansion, and relation detection. A pattern is defined as the expression of the mutual relations among the feature values of adjacent cells or regions. By computing topological and geometric features over each cell's neighborhood, we obtain a feature vector for the cell; every position in the space thus corresponds to a feature vector composed of a set of feature values, which together constitute a vector field for pattern recognition [20-21]. We use a supervised binary classifier, the support vector machine (SVM), to extract grid-pattern cells, and refine the results with the Gestalt principles of proximity, similarity, and closure.
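The partitioning scheme above can be sketched in a few lines; this is a minimal illustration for straight arcs, assuming a granularity `gap` has already been chosen from the mean edge length, and the function and variable names are ours, not from the original system. As She et al. note, arcs shorter than the granularity yield a single shorter cell.

```python
import math

def partition_arc(p0, p1, gap):
    """Split a straight arc p0->p1 into linear cells of roughly equal
    length `gap`; arcs shorter than `gap` become one single cell, so
    the resulting cells are not strictly equal in length."""
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, round(length / gap))      # number of cells on this arc
    cells = []
    for k in range(n):
        t0, t1 = k / n, (k + 1) / n      # parametric cell endpoints
        cells.append(((x0 + t0 * (x1 - x0), y0 + t0 * (y1 - y0)),
                      (x0 + t1 * (x1 - x0), y0 + t1 * (y1 - y0))))
    return cells
```

A 1 000 m arc partitioned at 300 m granularity, for example, yields three cells of about 333 m each, while a 100 m arc yields one 100 m cell.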
2 Grid Pattern Recognition by the Vector Partition Method
2.1 Extraction of Typical Grid-Pattern Features
Once the partition granularity is fixed, the neighborhood size chosen for feature extraction is directly related to scale. O'Sullivan et al. [22] argued that the neighborhood size should be determined by the characteristics of the study object, while Porta et al. [9] and She et al. [16] suggested setting it according to street scale. This paper treats the neighborhood size as a controllable parameter: following the practice of window-size selection in image processing, we first estimate the size of the target grids in the data and from this set a selection range for the neighborhood size.
A grid pattern is a regional structure composed of adjacent raster cells that satisfy certain statistical (geometric and topological) characteristics, with the cells arranged in a specific way along two approximately orthogonal directions. The vector partition method quantifies these characteristics with five indices: the direction distribution index, the orthogonality index, the membership degree, the extension index, and the structure index.
(1) Direction distribution index
Each cell after partitioning has a specific direction, expressed as the counterclockwise angle from due east. For ease of computation, the range [0°, 180°) is divided into 18 intervals numbered 1 to 18, and each cell's angle value is replaced by the number of the interval it falls in. Counting the angle values of all cells in the target cell's neighborhood yields a direction distribution histogram (Fig. 2(c)); for an ideal grid network the histogram clusters on two intervals (Fig. 2(d)). We define the direction distribution index to measure this bidirectional clustering within a cell's neighborhood, with the distance weight set to the reciprocal of the distance that maps the actual direction distribution onto the ideal grid distribution. Let the two clustering intervals be numbered a and b, let ωj be the weight of interval j, and let pj be the probability that an angle value falls in interval j.
$$ \omega_j = \max\left(\frac{1}{|j - a|}, \frac{1}{|j - b|}\right), \quad \omega_j \in (0, 1] $$ (1)
where ωj = 1 when j = a or j = b. The direction distribution index Di of cell i is:
$$ D_i = \sum\nolimits_{j=1}^{18} \omega_j \cdot p_j, \quad D_i \in (0, 1] $$ (2)
The larger Di is, the stronger the bidirectional clustering in the target cell's neighborhood, and the more likely the cell is to be classified as grid pattern. Table 1 shows DA < DB: the environment of cell A contains a larger proportion of interfering cells, i.e., cells deviating from the two peak directions, than that of cell B (Fig. 2(a), 2(b)).
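The direction binning and the index of Eqs. (1)-(2) can be sketched as follows; this is a minimal illustration with names of our own choosing, assuming the two peak intervals a and b have already been located in the histogram.

```python
def angle_bin(theta_deg):
    """Map a cell direction in [0, 180) degrees to one of the 18
    intervals of 10 degrees each, numbered 1..18."""
    return int(theta_deg % 180) // 10 + 1

def direction_index(p, a, b):
    """Direction distribution index D_i of Eq. (2): bin probabilities
    p (interval j stored at p[j-1]) weighted by closeness to the two
    peak intervals a and b, per the weight omega_j of Eq. (1)."""
    def omega(j):
        return max(1.0 if j == a else 1.0 / abs(j - a),
                   1.0 if j == b else 1.0 / abs(j - b))
    return sum(omega(j) * pj for j, pj in enumerate(p, start=1))
```

When all neighborhood directions fall exactly on the two peaks, the weights are 1 and Di reaches its maximum of 1; mass in bins far from both peaks is down-weighted by the reciprocal distance.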
Table 1  Index values of cells A and B

Cell | D        | V | M   | G       | T
A    | 0.907 69 | 1 | 0.2 | 0.842 7 | 0.62
B    | 0.960 97 | 1 | 1.0 | 0.663 4 | 0.89

(2) Orthogonality index
To judge whether the two clustering directions within a cell's neighborhood are approximately orthogonal, the orthogonality index Vi is defined as:
$$ V_i = \sin\left(|b - a| \cdot \frac{\pi}{18}\right), \quad V_i \in [0, 1] $$ (3)
The larger Vi is, the more likely the neighborhood of the target cell exhibits an orthogonal distribution and, other conditions being equal, the more likely the cell is to be classified as grid pattern. In Table 1, cells A and B both have an orthogonality index of 1; as Fig. 2 shows, their neighborhoods consist mostly of horizontally and vertically oriented cells.
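Eq. (3) is a direct computation; a sketch (with an illustrative function name):

```python
import math

def orthogonality_index(a, b):
    """Orthogonality index V_i of Eq. (3): each interval spans 10
    degrees (pi/18 rad), so |b - a| = 9 means the two peak directions
    are 90 degrees apart and V_i reaches its maximum of 1."""
    return math.sin(abs(b - a) * math.pi / 18)
```

For peak intervals 1 and 10 (directions 90° apart) the index is 1; for coincident peaks it is 0.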
(3) Membership degree
Even when a cell's environment satisfies the directional distribution of a grid pattern, we must still judge whether the cell itself deviates from the clustering directions and is therefore a noise cell. The membership degree Mi is defined as:
$$ M_i = \max\left(\frac{1}{|j - a|}, \frac{1}{|j - b|}\right), \quad M_i \in (0, 1] $$ (4)
where j is the interval of cell i's own direction, and Mi = 1 when j = a or j = b. In Fig. 2(a) and 2(b), the direction of cell A lies in an interval deviating from the orthogonal directions, so its membership degree is only 0.2 (Table 1), whereas cell B lies exactly in a clustering interval and has a membership degree of 1.
(4) Extension index
Under a grid pattern, cells extend linearly, as illustrated by the contrast between the meandering and straight arrangements in Fig. 3(a): although the network extension distance is the same, the difference in Euclidean distance gives the two paths very different degrees of extension. The extension index Gi is defined as:
$$ G_i = \frac{\sum\nolimits_{j=0}^{m-1} E_{dis_{ij}}}{\sum\nolimits_{j=0}^{m-1} N_{dis_{ij}}}, \quad G_i \in [0, 1] $$ (5)
where $E_{dis_{ij}} = \sqrt{(x_i - x_{ij})^2 + (y_i - y_{ij})^2}$ is the Euclidean distance from the midpoint of target cell i to the midpoint of extension endpoint cell ij; $N_{dis_{ij}} = N_{ij} \times l_{gapdis}$ is the corresponding network distance, with Nij the number of cells between target cell i and endpoint ij and lgapdis the partition granularity; and target cell i has m extension endpoints in total.
The larger Gi is, the looser and more extended the neighborhood environment; conversely, a small value indicates a compact neighborhood. In Table 1, GA > GB: relative to cell B, cell A sits in a looser neighborhood environment with better extension.
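Eq. (5) can be sketched as follows, assuming each extension endpoint is supplied as its midpoint coordinates plus the step count Nij along the network (names here are illustrative):

```python
import math

def extension_index(center, endpoints, gap):
    """Extension index G_i of Eq. (5): summed Euclidean distances from
    the target cell's midpoint to each extension endpoint, divided by
    the summed network distances (steps x partition granularity).
    `endpoints` holds (x, y, steps) triples."""
    xi, yi = center
    e = sum(math.hypot(x - xi, y - yi) for x, y, _ in endpoints)
    n = sum(steps * gap for _, _, steps in endpoints)
    return e / n
```

A straight extension of 3 steps at 300 m granularity reaching a point 900 m away gives Gi = 1; a meandering path covering the same 3 network steps but ending only 300 m away gives Gi = 1/3.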
(5) Structure index
The cells constituting a grid are grouped into two classes: skeleton cells, whose first-order adjacency degree is greater than 2 and which form the skeleton of the grid; and connection cells, whose first-order adjacency degree is 2 or less and which link the skeleton cells.
Let there be m extension endpoints. Counting, for each extension direction j, the number of skeleton cells CjS and the total number of cells Cj, the structure index Ti is defined as:
$$ T_i = \frac{\sum\nolimits_{j=1}^{m} \frac{C_j^S}{C_j}}{m}, \quad T_i \in [0, 1] $$ (6)
Ti is the average fraction of skeleton cells encountered per extension direction; equivalently, one skeleton cell is met on average every 1/Ti extension steps. This index reflects the size and linkage of the grid and can also rule out large branches in the road network (Fig. 4(a)). The larger Ti is, the more grid skeleton structure the cell's neighborhood contains and the denser the grid it may cover. From Table 1, the neighborhood of cell B contains more skeleton structure than that of cell A, so the grid density in B's neighborhood is higher.
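Eq. (6) reduces to averaging per-direction skeleton fractions; a sketch, assuming the per-direction counts have already been gathered (the input format is our own choice):

```python
def structure_index(per_direction):
    """Structure index T_i of Eq. (6): average over the m extension
    directions of the skeleton-cell fraction C_j^S / C_j encountered
    in each direction. `per_direction` holds (skeleton, total) pairs,
    one per direction."""
    return sum(s / c for s, c in per_direction) / len(per_direction)
```

For two directions with skeleton fractions 1/2 and 1/4, for instance, Ti = 0.375, i.e., a skeleton cell roughly every 1/0.375 ≈ 2.7 steps.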
2.2 SVM-Based Pattern Classification
Because road network data are large and complex, the classification variables may be correlated or in conflict, and it is difficult to set a single threshold for each index. We therefore use an SVM to combine the indices automatically according to the characteristics of the road network data; the SVM classification is performed with the libsvm package [23]. The classification steps are as follows.
(1) Scale the five feature values proposed in §2.1 to [0, 1], to prevent indices with large value ranges from weakening the effect of those with small ranges.
(2) Use the radial basis function (RBF) kernel. The RBF kernel is commonly used for nonlinear classification and requires fewer parameters than the polynomial kernel (only the penalty factor C and the kernel parameter γ), which effectively reduces model complexity.
(3) To prevent overfitting, perform ten-fold cross-validation on the training samples to determine the optimal model parameters.
(4) Classify the test samples with the model built from the optimal parameters determined in step (3).
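Steps (1) and (2) can be illustrated as follows; this is a sketch of min-max scaling and of the RBF kernel itself, not of the libsvm training code, and the function names are ours.

```python
import math

def minmax_scale(values):
    """Step (1): rescale one feature column to [0, 1] so that indices
    with large value ranges do not dominate those with small ranges."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0          # a constant column maps to all zeros
    return [(v - lo) / span for v in values]

def rbf_kernel(u, v, gamma):
    """Step (2): RBF kernel K(u, v) = exp(-gamma * ||u - v||^2); with
    features already scaled, only the penalty C and this gamma remain
    to be tuned by the cross-validation of step (3)."""
    sq = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq)
```

In step (3), a grid of (C, γ) pairs would be scored by ten-fold cross-validation and the best-scoring pair kept for step (4).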
2.3 Gestalt-Based Shape Completion and Branch Pruning
Because the unit of analysis (raster cells) differs from the recognition target (grids), and because classifier accuracy is limited, the completeness of the classification result cannot be guaranteed. To improve accuracy, this paper refines the SVM result using the Gestalt principles of proximity, similarity, and closure: first, qualifying background cells adjacent to the grid pattern are merged into it, expanding the boundary outward; then dangling boundary cells are eliminated, contracting the boundary inward. Denote cells identified as belonging to the grid pattern by Egrid and all other cells by Enon_grid. The rules are as follows.
(1) Create a queue and enqueue the identified Egrid cells. Dequeue the front cell; for each adjacent cell that is Enon_grid, compute the angle AEgrid_iEnon_grid and record it on that Enon_grid cell, with a threshold of 10°:
$$ A_{E_{grid\_i}E_{non\_grid}} = \left| \theta_{E_{grid\_i}} - \theta_{E_{non\_grid}} \right| + \theta_{accum\_i} $$ (7)
where θaccum_i is the angle accumulated in expanding from the original SVM-identified grid to cell i. When AEgrid_iEnon_grid is below the threshold, reclassify the cell as Egrid and enqueue it. Repeat until the queue is empty.
(2) Traverse all cells and store the Egrid cells in a list. For each list element, if the number of Egrid cells it connects to is less than 2, or it has adjacent cells on only one side, reclassify it as Enon_grid; repeat until no element's class changes.
Since the refinement requires two traversals of all the cells, with n cells in total the time complexity of shape completion and branch pruning is O(n).
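The queue-driven boundary expansion of step (1) can be sketched as follows; this is a simplified version with hypothetical data structures, keeping the 10° threshold and the accumulated-angle rule of Eq. (7).

```python
from collections import deque

def expand_grid(seeds, neighbors, angle_diff, threshold=10.0):
    """Gestalt completion, step (1): grow the grid region by absorbing
    adjacent background cells whose accumulated direction difference
    (Eq. (7)) stays below `threshold` degrees. `neighbors` maps a cell
    id to its adjacent cell ids; `angle_diff(u, v)` returns the
    direction difference |theta_u - theta_v| between two cells."""
    grid = set(seeds)
    accum = {c: 0.0 for c in seeds}       # theta_accum is 0 at the seeds
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in neighbors.get(u, ()):
            if v in grid:
                continue
            a = angle_diff(u, v) + accum[u]
            if a < threshold:             # absorb the cell, keep growing
                grid.add(v)
                accum[v] = a
                queue.append(v)
    return grid
```

Because the accumulated angle is carried along each expansion path, a chain of small deviations eventually exceeds the threshold and the growth stops, which prevents the region from drifting away from the original grid orientation.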
3 Grid Pattern Recognition in Road Networks
The experimental system for road network grid pattern recognition was implemented in Python and run in an i5-2640M/2.8 GHz/4 GB/Windows 7 environment, using 1:10 000 road network data of Shenzhen, China. The data were preprocessed as necessary: overpasses were removed, road topology was checked, pseudo-nodes were deleted, and roads were split at intersections. After preprocessing, the mean arc length of the Shenzhen road network was 308 m, so the network was partitioned with 300 m raster cells. The grid size was estimated to range from 4 to 12 steps, and with reference to this range a 10-step neighborhood was adopted.
In Fig. 5, the training sample region (inside the blue rectangle in Fig. 7) contains rich road detail, including grid patterns of different extents and arrangements as well as roads with large branches and large bends. The maximum model training accuracy reached by cross-validation was 85%. The trained model was then applied to the test samples, and a block of Futian District, Shenzhen was examined to observe the details of the result: the SVM classification cannot guarantee complete grid-pattern edges (Fig. 6(a)); after shape completion, i.e., a dilation operation, the grid boundary expands outward (Fig. 6(b)); finally, branch pruning trims the ragged branches (Fig. 6(c)). Comparing the final test result with grid patterns selected by visual interpretation, the accuracy was computed as the ratio of correctly classified grid cells to visually identified grid cells, and the recall as the ratio of correctly classified grid cells to the grid cells identified by the vector partition method. The vector partition method achieved an accuracy of 91.30% and a recall of 82.91%.
To compare the recognition performance of the vector partition method with traditional methods, two classic algorithms, those of Yang et al. [7] and Heinzle et al. [8], were used as references; the results are shown in Fig. 8. As Table 2 shows, the vector partition method achieves higher accuracy with only a small difference in recall. The relatively low recall is related to sample selection: because of the complexity of urban road networks, sampling inevitably loses some information. In terms of the details of the results, as shown in box I of Figs. 7 and 8, the vector partition method correctly excludes stair-step patterns, whereas the two comparison methods, which ignore context, misclassify them as grid patterns; and the method of Heinzle et al. [8] cannot effectively recognize the grid pattern with complex node connections shown in box II.
4 Conclusion
This paper proposes the vector partition method for grid pattern recognition. The method takes the raster cells obtained by rasterizing the road network as basic units, describes each cell by its neighborhood features, and performs classification by combining an SVM with Gestalt principles. By grasping road network patterns as a whole, the method offers the at-a-glance quality of spatial cognition, and it establishes a new inductive approach to spatial-cognitive recognition that applies statistical analysis and local graph-structure analysis jointly to the cells of the partitioned space; the resulting classification achieves higher accuracy at a finer level of detail. In addition, the parameters to be set have clear meanings, so the method can adapt to the study area and data scale through its parameter settings.
Future work will, on the theoretical side, use the vector partition method to build models for recognizing other road network patterns (radial and ring patterns), and, on the applied side, attempt to apply the proposed method to road network selection and updating and to the assisted identification of urban functional zones.
Table 1  Modeling methods for style elements

Line work
  probability statistics: Sobel filter [55], image gradient [39]
  neural network: UNet [47], Laplacian operator [56], stroke pyramid [57], holistically-nested edge detection [58]
Color
  probability statistics: mean and variance [21, 59], histogram [24-26], probability density function [28], principal component analysis [27], Sobel filter [29]
  content parsing: image palette [30, 60], expectation maximization [61], color classification [33], probability distribution function [36], image segmentation [34, 35, 62-64], interactive strokes [65-66], locally linear embedding [32], progressive histogram [67], radial basis function [37], dominant colors [31], local affine transform [68-69]
  neural network: Gram matrix [40, 41, 51-53, 56, 58, 70, 71], histogram [72], whitening and coloring transform [54], minimax game [47-48], Laplacian operator [53]
Texture
  probability statistics: image filters [22], MRF [73], image segmentation [69], covariance matrix [74]
  neural network: Gram matrix [40, 41, 51-53, 56, 58, 70-71], MRF [45], perceptual loss [58, 75-76], Laplacian operator [56], minimax game [47-49, 77], histogram [72], Poisson equation [78], adaptive instance normalization [79], first-order statistics [44, 80], whitening and coloring transform [54], conditional instance normalization [81], layer instance normalization [82], central moment discrepancy [83]
Point symbols / labels
  neural network: Gram matrix [58, 84], minimax game [85]
Map layout elements
  (none reported)
Table 2  Comparison of vector and raster map style transfer

Aspect               | Vector map style transfer                             | Raster map style transfer
Style elements       | mainly color                                          | mainly texture
Content parsing      | required                                              | not required
Cartographic rules   | required                                              | not required
Readability          | high                                                  | low
Content consistency  | fully consistent                                      | possibly inconsistent
Applicable scenarios | map design                                            | tile production
Reference sources    | visual artworks of all kinds: paintings, photos, etc. | remote sensing images, etc.
[1] Wang Jiayao. Cartography in the Age of Spatio-Temporal Big Data[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(10): 1226-1237
[2] Wang Jiayao, Cheng Yi. Discussions on the Attributes of Cartography and the Value of Map[J]. Acta Geodaetica et Cartographica Sinica, 2015, 44(3): 237-241
[3] Andrienko N, Andrienko G, Gatalsky P. Exploratory Spatiotemporal Visualization: An Analytical Review[J]. Journal of Visual Languages & Computing, 2003, 14(6): 503-541
[4] Çöltekin A, Bleisch S, Andrienko G, et al. Persistent Challenges in Geovisualization: A Community Perspective[J]. International Journal of Cartography, 2017, 3(s1): 115-139
[5] Griffin A, Robinson A, Roth R. Envisioning the Future of Cartographic Research[J]. International Journal of Cartography, 2017, 3: 1-8
[6] Guo Renzhong, Ying Shen. The Rejuvenation of Cartography in ICT Era[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(10): 1274-1283. doi: 10.11947/j.AGCS.2017.20170335
[7] Meng Liqiu. The Constancy and Volatility in Cartography[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(10): 1637-1644. doi: 10.11947/j.AGCS.2017.20170359
[8] Zhou Chenghu. The Era of Holographic Maps Has Arrived: Historical Evolution of Map Functions[J]. Science of Surveying and Mapping, 2014, 39(7): 3-8
[9] Ai Tinghua. Development of Cartography Driven by Big Data[J]. Journal of Geomatics, 2016, 41(2): 1-7
[10] Gao Jun. The 60th Anniversary and Prospect of Acta Geodaetica et Cartographica Sinica[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(10): 1219-1225
[11] Li Lin, Ying Shen. Fundamental Problem on Spatial Scale[J]. Geomatics and Information Science of Wuhan University, 2005, 30(3): 199-203
[12] Meng Liqiu. Map Serves Everybody and Everybody Makes Map[J]. Journal of Geomatics Science and Technology, 2012, 29(5): 313-320
[13] Jing Y C, Yang Y Z, Feng Z L, et al. Neural Style Transfer: A Review[J]. arXiv preprint arXiv:1705.04058, 2017
[14] Faridul H S, Pouli T, Chamaret C, et al. Colour Mapping: A Review of Recent Methods, Extensions and Applications[J]. Computer Graphics Forum, 2016, 35(1): 59-88
[15] Pitie F. Advances in Colour Transfer[J]. IET Computer Vision, 2020, 14(6): 304-322
[16] Beconytė G, Viliuvienė R. The Concept and Importance of Style in Cartography[J]. Geodesy and Cartography, 2009, 35(3): 82-91
[17] Ling Shanjin. Map Aesthetics[M]. Wuhu: Anhui Normal University Press, 2010
[18] MacEachren A M. How Maps Work: Representation, Visualization, and Design[M]. New York : Guilford Publications, 2004
[19] Wu M G, Qiao L G. Designing Metaphorical Multivariate Symbols to Optimize Dockless Bike Sharing[J]. The Cartographic Journal, 2022, 17: 1-19
[20] Kandinsky W. On the Spirit in Art[M]. Shanghai: Shanghai People's Fine Arts Publishing House, 2014
[21] Reinhard E, Adhikhmin M, Gooch B, et al. Color Transfer Between Images[J]. IEEE Computer Graphics and Applications, 2001, 21(5): 34-41
[22] Efros A A, Freeman W T. Image Quilting for Texture Synthesis and Transfer[C]//The 28th Annual Conference on Computer Graphics and Interactive Techniques, New York, USA, 2001
[23] Barnes C, Zhang F L. A Survey of the State-of-the-Art in Patch-Based Synthesis[J]. Computational Visual Media, 2017, 3(1): 3-20
[24] Morovic J, Sun P L. Accurate 3D Image Colour Histogram Transformation[J]. Pattern Recognition Letters, 2003, 24(11): 1725-1735
[25] Neumann L, Neumann A. Color Style Transfer Techniques Using Hue, Lightness and Saturation Histogram Matching[C]//The 1st Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Girona, Spain, 2005
[26] Senanayake C R, Alexander D. Colour Transfer by Feature Based Histogram Registration[C]//British Machine Vision Conference, Coventry, UK, 2007
[27] Abadpour A, Kasaei S. A Fast and Efficient Fuzzy Color Transfer Method[C]//The 4th IEEE International Symposium on Signal Processing and Information Technology, Rome, Italy, 2004
[28] Pitie F, Kokaram A C, Dahyot R. N-Dimensional Probability Density Function Transfer and Its Application to Color Transfer[C]//The 10th IEEE International Conference on Computer Vision, Beijing, China, 2005
[29] Xiao X, Ma L. Gradient-Preserving Color Transfer[J]. Computer Graphics Forum, 2009, 28(7): 1879-1886
[30] Greenfield G R, House D H. A Palette-Driven Approach to Image Color Transfer[C]//The 1st Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, London, UK, 2005
[31] Yoo J D, Park M K, Cho J H, et al. Local Color Transfer Between Images Using Dominant Colors[J]. Journal of Electronic Imaging, 2013, 22(3): 1-11
[32] Zeng K, Zhang R M, Lan X D, et al. Color Style Transfer by Constraint Locally Linear Embedding[C]//The 18th IEEE International Conference on Image Processing, Brussels, Belgium, 2011
[33] Chang Y, Saito S, Uchikawa K, et al. Example-Based Color Stylization of Images[J]. ACM Transactions on Applied Perception, 2005, 2(3): 322-345
[34] Wu F, Dong W, Kong Y, et al. Content-Based Colour Transfer[J]. Computer Graphics Forum, 2013, 32(1): 190-203
[35] Tsai Y, Shen X, Lin Z, et al. Sky is not the Limit: Semantic-Aware Sky Replacement[J]. ACM Transactions on Graphics, 2016, 35(4): 149
[36] Wen C, Hsieh C, Chen B, et al. Example-based Multiple Local Color Transfer by Strokes[J]. Computer Graphics Forum, 2008, 27(7): 1765-1772
[37] Oskam T, Hornung A, Sumner R W, et al. Fast and Stable Color Balancing for Images and Augmented Reality[C]//The 2nd International Conference on 3D Imaging, Modeling, Zurich, Switzerland, 2012
[38] Hertzmann A, Jacobs C E, Oliver N, et al. Image Analogies[C]//The 28th Annual Conference on Computer Graphics and Interactive Techniques, New York, USA, 2001
[39] Zhang W, Cao C, Chen S F, et al. Style Transfer via Image Component Analysis[J]. IEEE Transactions on Multimedia, 2013, 15(7): 1594-1601
[40] Gatys L A, Ecker A S, Bethge M. A Neural Algorithm of Artistic Style[J]. arXiv preprint arXiv:1508.06576, 2015
[41] Gatys L A, Ecker A S, Bethge M. Image Style Transfer Using Convolutional Neural Networks[C]//IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016
[42] Portilla J, Simoncelli E P. A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients[J]. International Journal of Computer Vision, 2000, 40: 49-70
[43] Li Y H, Wang N Y, Liu J Y, et al. Demystifying Neural Style Transfer[C]//The 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 2017
[44] Shen F L, Yan S C, Zeng G. Neural Style Transfer Via Meta Networks[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018
[45] Li C, Wand M. Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis[C]//IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016
[46] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative Adversarial Networks[J]. arXiv preprint arXiv:1406.2661, 2014
[47] Isola P, Zhu J Y, Zhou T H, et al. Image-to-Image Translation with Conditional Adversarial Networks[C]//IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017
[48] Zhu J Y, Park T, Isola P, et al. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks[C]//IEEE International Conference on Computer Vision, Venice, Italy, 2017
[49] Kotovenko D, Sanakoyeu A, Ma P C, et al. A Content Transformation Block for Image Style Transfer[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019
[50] Liao J, Yao Y, Yuan L, et al. Visual Attribute Transfer Through Deep Image Analogy[J]. ACM Transactions on Graphics, 2017, 36(4): 1-15
[51] Liao Y S, Huang C R. Semantic Context-Aware Image Style Transfer[J]. IEEE Transactions on Image Processing, 2016, 31: 1911-1923
[52] Ma Z, Lin T, Li X, et al. Dual-Affinity Style Embedding Network for Semantic-Aligned Image Style Transfer[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, DOI: 10.1109/TNNLS.2022.3143356
[53] Luan F J, Paris S, Shechtman E, et al. Deep Photo Style Transfer[C]//IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017
[54] Li Y J, Liu M Y, Li X T, et al. A Closed-Form Solution to Photorealistic Image Stylization[C]//The European Conference on Computer Vision (ECCV), Munich, Germany, 2018
[55] Huang Y C, Tung Y S, Chen J C, et al. An adaptive Edge Detection Based Colorization Algorithm and Its Applications[C]//The 13th Annual ACM International Conference on Multimedia, Hilton, Singapore, 2005
[56] Li S, Xu X, Nie L, et al. Laplacian-Steered Neural Style Transfer[C]//The 25th ACM International Conference on Multimedia, Mountain View, California, USA, 2017
[57] Jing Y C, Liu Y, Yang Y Z, et al. Stroke Controllable Fast Style Transfer with Adaptive Receptive Fields[C]//The European Conference on Computer Vision (ECCV), Munich, Germany, 2018
[58] Cheng M M, Liu X C, Wang J, et al. Structure-Preserving Neural Style Transfer[J]. IEEE Transactions on Image Processing, 2020, 29: 909-920
[59] Welsh T, Ashikhmin M, Mueller K. Transferring Color to Greyscale Images[C]//The 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, Texas, 2002
[60] Frigo O, Sabater N, Demoulin V, et al. Optimal Transportation for Example-Guided Color Transfer[C]//Asian Conference on Computer Vision, Kuala Lumpur, Malaysia, 2015
[61] Tai Y W, Jia J Y, Tang C K. Local Color Transfer via Probabilistic Segmentation by Expectation-Maximization[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington DC, USA, 2005
[62] Dong Y, Xu D. Interactive Local Color Transfer Based on Coupled Map Lattices[C]//The 11th IEEE International Conference on Computer-Aided Design and Computer Graphics, Huayin, China, 2019
[63] Nguyen C H, Ritschel T, Myszkowski K, et al. 3D Material Style Transfer[J]. Computer Graphics Forum, 2012, 31(22): 431-438
[64] Hristova H, le Meur O, Cozot R, et al. Style-Aware Robust Color Transfer[C]//The Workshop on Computational Aesthetics, Beijing, China, 2015
[65] Lischinski D, Farbman Z, Uyttendaele M, et al. Interactive Local Adjustment of Tonal Values[J]. ACM Transactions on Graphics, 2006, 25(3): 646-653
[66] An X B, Pellacini F. User-Controllable Color Transfer[J]. Computer Graphics Forum, 2010, 29(2): 263-271
[67] Pouli T, Reinhard E. Progressive Color Transfer for Images of Arbitrary Dynamic Range[J]. Computers & Graphics, 2011, 35(1): 67-80
[68] Shih Y, Paris S, Durand F, et al. Data-Driven Hallucination of Different Times of Day from a Single Outdoor Photo[J]. ACM Transactions on Graphics, 2013, 32(6): 200
[69] Okura F, Vanhoey K, Bousseau A, et al. Unifying Color and Texture Transfer for Predictive Appearance Manipulation[J]. Computer Graphics Forum, 2015, 34(4): 53-63
[70] Gatys L A, Ecker A S, Bethge M, et al. Controlling Perceptual Factors in Neural Style Transfer[C]//IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017
[71] Ulyanov D, Lebedev V, Vedaldi A, et al. Texture Networks: Feed-Forward Synthesis of Textures and Stylized Images[J]. arXiv preprint arXiv:1603.03417, 2016
[72] Risser E, Wilmot P, Barnes C. Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses[J]. arXiv preprint arXiv:1701.08893, 2017
[73] Kwatra V, Schödl A, Essa I, et al. Graphcut Textures[J]. ACM Transactions on Graphics, 2003, 22(3): 277-286
[74] Arbelot B, Vergne R, Hurtut T, et al. Local Texture-Based Color Transfer and Colorization[J]. Computers & Graphics, 2017, 62: 15-27
[75] Johnson J, Alahi A, Li F F. Perceptual Losses for Real-Time Style Transfer and Super-Resolution[C]//European Conference on Computer Vision, Amsterdam, the Netherlands, 2016
[76] Chen D D, Yuan L, Liao J, et al. StyleBank: An Explicit Representation for Neural Image Style Transfer[C]//IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 2017
[77] Choi Y, Choi M, Kim M, et al. StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018
[78] Mechrez R, Shechtman E, Zelnik-Manor L. Photorealistic Style Transfer with Screened Poisson Equation[C]//The British Machine Vision Conference, London, UK, 2017
[79] Huang X, Belongie S. Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization[C]//IEEE International Conference on Computer Vision, Venice, Italy, 2017
[80] Zhang Y, Zhang Y, Cai W. A Unified Framework for Generalizable Style Transfer: Style and Content Separation[J]. IEEE Transactions on Image Processing, 2020, DOI: 10.1109/TIP.2020.2969081
[81] Choi Y, Uh Y, Yoo J, et al. StarGAN v2: Diverse Image Synthesis for Multiple Domains[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020
[82] Xu W J, Long C J, Wang R S, et al. DRB-GAN: A Dynamic ResBlock Generative Adversarial Network for Artistic Style Transfer[J]. arXiv preprint arXiv:2108.07379, 2021
[83] Kalischek N, Wegner J D, Schindler K. In the Light of Feature Distributions: Moment Matching for Neural Style Transfer[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021
[84] Atarsaikhan G, Iwana B K, Narusawa A, et al. Neural font Style Transfer[C]//The 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 2017
[85] Azadi S, Fisher M, Kim V, et al. Multi-content GAN for Few-Shot Font Style Transfer[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018
[86] Friedmannová L. What Can We Learn from the Masters? Color Schemas on Paintings as the Source for Color Ranges Applicable in Cartography[M]//Heidelberg: Springer, 2009
[87] Christophe S, Hoarau C. Expressive Map Design Based on Pop Art: Revisit of Semiology of Graphics? [J]. Cartographic Perspectives, 2012, 10(73): 61-74
[88] Kang Y H, Gao S, Roth R E. Transferring Multiscale Map Styles Using Generative Adversarial Networks[J]. arXiv preprint arXiv:1905.02200, 2019
[89] Wu M, Sun Y, Li Y. Adaptive Transfer of Color from Images to Maps and Visualizations[J]. Cartography and Geographic Information Science, 2022, 49(4): 289-312
[90] Bogucka E P, Meng L. Projecting Emotions from Artworks to Maps Using Neural Style Transfer[C]//The International Cartographic Assocication, Tokyo, Japan, 2019
[91] Li Z. Generating Historical Maps from Online Maps[C]//The 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Chicago, IL, USA, 2019
[92] Li Yaqian, Sun Yanjie, Qiao Lige, et al. Suitability Analysis of Style Transfer from Painting to Map[J]. Science of Surveying and Mapping, 2022, 47(7): 176-187
[93] Hoarau C, Christophe S. Cartographic Continuum Rendering Based on Color and Texture Interpolation to Enhance Photo-Realism Perception[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2017, 127: 27-38
[94] Christophe S, Mermet S, Laurent M, et al. Neural Map Style Transfer Exploration with GANs[J]. International Journal of Cartography, 2022, 8(1): 18-36
[95] Chen X, Chen S, Xu T, et al. SMAPGAN: Generative Adversarial Network-Based Semisupervised Styled Map Tile Generation Method[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(5): 4388-4406
[96] Song J Q, Li J, Chen H, et al. MapGen-GAN: A Fast Translator for Remote Sensing Image to Map via Unsupervised Adversarial Learning[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017, 14: 2341-2357
[97] Xu C X, Zhao B. Satellite Image Spoofing: Creating Remote Sensing Dataset with Generative Adversarial Networks[C]//The 10th International Conference on Geographic Information Science, Dagstuhl, Germany, 2018
[98] Bratkova M, Shirley P, Thompson W. Artistic Rendering of Mountainous Terrain[J]. ACM Trans Graph, 2009, 28: 102
[99] Jenny H, Jenny B. Challenges in Adapting Example-Based Texture Synthesis for Panoramic Map Creation: A Case Study[J]. Cartography and Geographic Information Science, 2013, 40: 297-304
[100] Jenny B, Heitzler M, Singh D, et al. Cartographic Relief Shading with Neural Networks[J]. IEEE Transactions on Visualization and Computer Graphics, 2021, 27(2): 1225-1235
[101] Wu M G, Sun Y J, Jiang S J. Adaptive Color Transfer from Images to Terrain Visualizations[J]. arXiv preprint arXiv:2205.14908, 2022
[102] Arnheim R. Art and Visual Perception[M]. Changsha: Hunan Fine Arts Publishing House, 2008
[103] Essers V. Matisse[M]. Shanghai: Sendpoints, 2021
[104] Kempadoo K A, Mosharov E V, Choi S J, et al. Dopamine Release from the Locus Coeruleus to the Dorsal Hippocampus Promotes Spatial Learning and Memory[J]. The National Academy of Sciences of the United States of America, 2016, 113(51): 14835-14840
[105] Westbrook A, Braver T S. Dopamine does Double Duty in Motivating Cognitive Effort[J]. Neuron, 2016, 89(4): 695-710
[106] Norman D. The Design of Everyday Things: Revised and Expanded Edition[M]. New York: Basic Books, 2013