We study the approximation of functions from anisotropic Sobolev classes B(W_p^r([0,1]^d)) and Hölder-Nikolskii classes B(H_p^r([0,1]^d)) in the L_q([0,1]^d) norm with q < p in the quantum model of computation. We determine the quantum query complexity of this problem up to logarithmic factors. The results show that quantum algorithms are significantly better than classical deterministic or randomized algorithms.
We study the approximation of the integration of multivariate functions in the quantum model of computation. Using a new reduction approach, we obtain a lower bound on the n-th minimal query error for the anisotropic Sobolev class B(W_p^r([0,1]^d)) (r ∈ R_+^d). Combining this result with our previous one, we determine the optimal bound of the n-th minimal query error for the anisotropic Hölder-Nikolskii class B(H_∞^r([0,1]^d)) and the Sobolev class B(W_∞^r([0,1]^d)). The results show that for these two types of classes quantum algorithms give a significant speedup over classical deterministic and randomized algorithms.
We study the approximation of the imbedding of functions from anisotropic and generalized Sobolev classes into the L_q([0,1]^d) space in the quantum model of computation. Based on quantum algorithms for approximating the finite-dimensional imbedding from L_p^N to L_q^N, we develop quantum algorithms for approximating the imbedding from anisotropic Sobolev classes B(W_p^r([0,1]^d)) into the L_q([0,1]^d) space for all 1 ≤ p, q ≤ ∞ and prove their optimality. Our results show that for p < q the quantum model of computation can bring a speedup of roughly up to a squaring of the rate in the classical deterministic and randomized settings.
In recent years, transfer learning has attracted increasingly broad interest. We study the classification problem in which the target data and the source-domain data are drawn from different distributions, and build an inductive classification model to predict newly arriving target data. We first analyze the class-proportion drift problem in transductive transfer learning, and then propose a normalization method that brings the predicted class proportions close to the actual sample class proportions. Furthermore, we propose an inductive transfer learning algorithm based on a hybrid regularization framework, which includes manifold regularization over the distribution structure of the target domain, entropy regularization of the prediction probabilities, and expectation regularization of the class proportions. This framework is applied to the inductive model learned from the source domain to the target domain. Finally, experimental results on real-world text data sets show that the proposed inductive transfer learning model is effective and can directly predict newly arriving target data.
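The three regularizers named in the abstract can be combined into a single training objective. The sketch below is an illustrative numpy rendering under our own assumptions (a simple binary logistic model, our own variable names and weighting parameters), not the paper's exact formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hybrid_regularized_loss(w, Xs, ys, Xt, L, target_prop,
                            lam_manifold=1.0, lam_entropy=0.1, lam_expect=1.0):
    """Illustrative hybrid-regularization objective (binary case).

    Xs, ys      : labeled source-domain data (ys in {0, 1})
    Xt          : unlabeled target-domain data
    L           : graph Laplacian built on Xt (manifold regularizer)
    target_prop : assumed class-1 proportion in the target domain
    """
    eps = 1e-12
    # Supervised log loss on the source domain.
    ps = sigmoid(Xs @ w)
    loss_src = -np.mean(ys * np.log(ps + eps) + (1 - ys) * np.log(1 - ps + eps))

    pt = sigmoid(Xt @ w)
    # Manifold regularization: predictions should vary smoothly over
    # the neighborhood graph of the target domain.
    reg_manifold = pt @ L @ pt / len(pt)
    # Entropy regularization: push target predictions toward confident values.
    reg_entropy = -np.mean(pt * np.log(pt + eps) + (1 - pt) * np.log(1 - pt + eps))
    # Expectation regularization: predicted class proportion should
    # match the assumed target class proportion.
    reg_expect = (np.mean(pt) - target_prop) ** 2

    return (loss_src + lam_manifold * reg_manifold
            + lam_entropy * reg_entropy + lam_expect * reg_expect)
```

Minimizing this objective with any gradient-based optimizer yields an inductive model: since the learned w defines a classifier, newly arriving target points can be scored directly without re-running a transductive procedure.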
Analytical study of large-scale nonlinear neural circuits is a difficult task. Here we analyze the function of neural systems by probing the fuzzy logical framework of the neural cells' dynamical equations. Although there is a close relation between the theories of fuzzy logical systems and neural systems, and many papers investigate this subject, most investigations focus on finding new functions of neural systems by hybridizing fuzzy logical and neural systems. In this paper, the fuzzy logical framework of a neural cell is abstracted and used to understand the nonlinear dynamic attributes of a common neural system. Our analysis enables the educated design of network models for classes of computation. As an example, a recurrent network model of the primary visual cortex has been built and tested using this approach.
HU Hong1, LI Su2, WANG YunJiu2, QI XiangLin2 & SHI ZhongZhi1 1 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China
Nonlinear m-term approximation plays an important role in machine learning, signal processing and statistical estimation. In this paper, by means of a nondecreasing dominated function, a greedy adaptive compression numerical algorithm for the best m-term approximation with regard to a tensor product wavelet-type basis is proposed. The algorithm provides the asymptotically optimal approximation for the class of periodic functions with mixed Besov smoothness in the L_q norm. Moreover, it depends only on the expansion of the function f in the tensor product wavelet-type basis, but neither on q nor on any special features of f.
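The basic mechanism behind such adaptive compression schemes is greedy m-term selection: retain the m expansion coefficients of largest magnitude and discard the rest. The sketch below shows this generic selection step only; it is our own illustration, not the paper's exact algorithm, which additionally involves the nondecreasing dominated function:

```python
import numpy as np

def greedy_m_term(coeffs, m):
    """Generic greedy m-term approximation: keep the m coefficients of
    largest magnitude and zero out the rest. Note that the selection
    depends only on the expansion coefficients, not on the target
    norm index q.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    if m >= coeffs.size:
        return coeffs.copy()
    # Indices of the m largest |c_k| (unordered, via partial sort).
    keep = np.argpartition(np.abs(coeffs), -m)[-m:]
    out = np.zeros_like(coeffs)
    out[keep] = coeffs[keep]
    return out
```

For example, applied to the coefficient vector [3, -5, 1, 2] with m = 2, the procedure keeps the two largest-magnitude coefficients (-5 and 3) and zeros the others.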