Image matching based on the scale-invariant feature transform (SIFT) is one of the most popular image matching algorithms, exhibiting high robustness and accuracy. To reduce complexity, SIFT descriptors are generally computed from grayscale images rather than color images. In this case, regions with similar grayscale levels but different hues tend to produce incorrect matches, so the loss of color information may reduce the matching ratio. An image matching algorithm based on SIFT is proposed that adds a color offset and an exposure offset when converting color images to grayscale in order to improve the matching ratio. Experimental results show that the proposed algorithm can effectively distinguish regions with different colors but similar grayscale levels, and increases the matching ratio of SIFT-based image matching. Furthermore, it introduces little additional complexity compared with the traditional SIFT.
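The abstract does not give the exact form of the color and exposure offsets, so the following Python sketch is only an illustration of the idea: the standard luminance conversion is perturbed by a hypothetical hue-dependent term and a constant exposure term before the usual SIFT detection and ratio-test matching. The offset formulas, weights, and function names are assumptions, not the paper's method.

import cv2
import numpy as np

def to_gray_with_offsets(bgr, color_weight=0.1, exposure_offset=10.0):
    # Assumed color-aware grayscale conversion: the max-min channel spread acts
    # as a rough hue/saturation cue so equally bright regions with different
    # hues map to different gray levels. Not the paper's exact formula.
    bgr = bgr.astype(np.float32)
    gray = 0.114 * bgr[:, :, 0] + 0.587 * bgr[:, :, 1] + 0.299 * bgr[:, :, 2]
    color_offset = bgr.max(axis=2) - bgr.min(axis=2)
    gray = gray + color_weight * color_offset + exposure_offset
    return np.clip(gray, 0, 255).astype(np.uint8)

def match_sift(img_a, img_b):
    # Standard SIFT detection and Lowe ratio-test matching on the modified grayscale images.
    sift = cv2.SIFT_create()
    ka, da = sift.detectAndCompute(to_gray_with_offsets(img_a), None)
    kb, db = sift.detectAndCompute(to_gray_with_offsets(img_b), None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(da, db, k=2)
    return [m for m, n in matches if m.distance < 0.75 * n.distance]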
To address the weak edge preservation and ringing artifacts of traditional super-resolution restoration algorithms, a projection-onto-convex-sets (POCS) super-resolution restoration algorithm based on a modified point spread function (PSF) is proposed. First, the edges of the reference image are detected. Then, the traditional PSF is modified with a weighting factor and split into eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, and 157.5°), which reduces the PSF's range of action at edge regions. Finally, the modified PSF is used to iteratively correct the reference frame until the error between the estimated and actual gray values falls below a given threshold or the preset number of iterations is reached, yielding the super-resolution restored image. The quality of the restored images is evaluated with peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity (SSIM). Experimental results on two classes of test images show that the PSNR increases by 3.46 to 6.91 dB, the MSE decreases by 43.47 to 87.82, and the SSIM increases by 0.0508 to 0.3817. The proposed algorithm improves the edge-preserving ability of super-resolution restoration and the quality of the restored images.
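A minimal Python sketch of the POCS iteration is given below, under stated simplifications: instead of the paper's eight directional PSF variants, an isotropic Gaussian PSF is simply down-weighted at edge pixels, which only approximates reducing the PSF's range of action at edges. The edge detector, weight value, PSF size, and convergence threshold are assumptions chosen for brevity.

import numpy as np
from scipy.ndimage import zoom, convolve, sobel

def gaussian_psf(size=5, sigma=1.0):
    # Normalized isotropic Gaussian kernel used as a stand-in PSF.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def pocs_sr(low_res, scale=2, iterations=20, delta=2.0, edge_weight=0.3):
    # Initial high-resolution estimate by bilinear interpolation of the reference frame.
    hr = zoom(low_res.astype(np.float64), scale, order=1)
    psf = gaussian_psf()
    # Edge map of the reference estimate from the gradient magnitude.
    grad = np.hypot(sobel(hr, axis=0), sobel(hr, axis=1))
    edges = grad > grad.mean() + grad.std()
    # Assumed weighting factor: weaker PSF correction at edge pixels.
    weight = np.where(edges, edge_weight, 1.0)

    for _ in range(iterations):
        # Simulate the low-resolution observation from the current estimate.
        simulated = convolve(hr, psf, mode="nearest")[::scale, ::scale]
        residual = low_res - simulated
        # Stop when estimated and actual gray values agree within the threshold.
        if np.abs(residual).max() < delta:
            break
        # Back-project the residual onto the high-resolution grid, modulated by the edge weight.
        correction = zoom(residual, scale, order=0)
        hr += weight * convolve(correction, psf, mode="nearest")
    return np.clip(hr, 0, 255)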
With the development of three-dimensional (3D) technology, visual fatigue in 3D video has received increasing attention. In this paper, we combine human vision characteristics with depth perception theory and propose a 3D video visual comfort evaluation method based on the consistency of accommodation and convergence, which evaluates visual comfort quantitatively under different horizontal disparities and viewing distances. Experimental results show that the proposed method exhibits good consistency with subjective assessment results.
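The geometric quantity underlying such an evaluation is the mismatch between the accommodation distance (the screen plane) and the convergence distance implied by the on-screen horizontal disparity. The Python sketch below computes this mismatch in diopters; the 65 mm interocular distance and the comfort-zone threshold are common assumptions, not values taken from the paper, and the paper's actual comfort model may differ.

def vergence_accommodation_mismatch(disparity_m, viewing_distance_m, ipd_m=0.065):
    # Perceived depth of the fused point from similar triangles; positive
    # disparity is taken as uncrossed (point appears behind the screen).
    convergence_distance = viewing_distance_m * ipd_m / (ipd_m - disparity_m)
    accommodation_diopters = 1.0 / viewing_distance_m
    convergence_diopters = 1.0 / convergence_distance
    return abs(accommodation_diopters - convergence_diopters)

def is_comfortable(disparity_m, viewing_distance_m, limit_diopters=0.3):
    # Rule-of-thumb check: mismatch within an assumed "zone of comfort".
    return vergence_accommodation_mismatch(disparity_m, viewing_distance_m) <= limit_diopters

# Example: 10 mm uncrossed disparity viewed from 2 m.
print(is_comfortable(0.010, 2.0))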