






Trustworthy Artificial Intelligence


NASAC/FMAC

As artificial intelligence permeates everyday human life, public acceptance of AI keeps rising, and its trustworthy operation has attracted broad attention. The "trustworthiness" of AI is a new concept built on existing notions such as reliability, safety, robustness, explainability, and controllability. For an intelligent system, "trustworthy" means that, throughout the dynamic process in which an intelligent entity fulfills its intended requirements in an open, dynamic environment, its behavior and the results it produces always match people's expectations. It emphasizes agreement between goals and their realization, and the explainability, predictability, and controllability of behaviors and outcomes. Beyond this, trustworthy intelligent computing also requires that the system keep delivering services that meet expectations even under dynamic disturbances such as environmental influences and external attacks. Research on the trustworthiness of AI, both in China and abroad, is still at an exploratory stage. This forum will introduce the latest trustworthiness techniques from the perspectives of theory, methodology, technology, and application, aiming to offer ideas for improving the efficiency of intelligent system development, raising the quality of hardware and software design, strengthening the trustworthiness of intelligent systems, optimizing development processes, and controlling costs. The forum will invite academic experts to present high-quality research results and to hold related technical discussions.

Forum Organizing Committee:

Qin Li (East China Normal University)

Yixiang Chen (East China Normal University)

Schedule:

Time: Saturday, November 21, 2020, 14:00-17:30

Venue: Chongqing R&F Holiday Hotel, Meeting Room 1

Forum Agenda:

Talks and Speaker Bios:

1. Lijun Zhang: Deep Neural Networks: From Verification to Generalisation

Abstract: In this talk, we survey major techniques and tools for verifying deep neural networks, in particular the notion of robustness for specifying safety properties. Moreover, we study the novel concept of weight correlation in deep neural networks and discuss its impact on the networks' generalisation ability. We argue that weight correlation can improve the generalisation of neural networks. Finally, we discuss how it can be intertwined with robustness to obtain reliable networks.
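
To make the weight-correlation idea concrete, here is a minimal sketch in Python (not taken from the talk) under one natural reading of the notion: the average cosine similarity between the weight vectors of distinct neurons in a fully connected layer. The exact definition and its connection to generalisation bounds used by the speaker may differ.

```python
import numpy as np

def layer_weight_correlation(W: np.ndarray) -> float:
    """Average pairwise cosine similarity between the weight vectors
    (rows of W) of distinct neurons in one fully connected layer."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / np.clip(norms, 1e-12, None)          # unit-length weight vectors
    C = np.abs(U @ U.T)                          # |cosine| between all neuron pairs
    n = W.shape[0]
    return (C.sum() - np.trace(C)) / (n * (n - 1))   # mean over distinct pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W_random = rng.normal(size=(64, 128))                  # weakly correlated neurons
    W_tied = np.tile(rng.normal(size=(1, 128)), (64, 1))   # fully correlated neurons
    print("random layer:", round(layer_weight_correlation(W_random), 3))
    print("tied layer  :", round(layer_weight_correlation(W_tied), 3))
```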

Bio: Lijun Zhang is a research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences (ISCAS). He is also the director of the Sino-Europe Joint Institute of Dependable and Smart Software at the Institute of Intelligent Software in Guangzhou. Before this he was a postdoctoral researcher at the University of Oxford, and then an associate professor in the Language-Based Technology section, DTU Compute, Technical University of Denmark. He received a Diploma degree and a PhD (Dr.-Ing.) from Saarland University. His research interests include probabilistic model checking, simulation reduction and decision algorithms, abstraction and model checking, learning algorithms, and verification of deep neural networks. He leads the development of several tools, including PRODeep for verifying neural networks, ROLL for learning automata, and the model checker ePMC, previously known as IscasMC.

2. Lei Bu: Adversarial Example Generation for Neural Networks Based on Gradient-Free Optimization

Abstract: In recent years, AI systems typified by image recognition have been widely deployed across many domains. However, because such systems and models are inherently hard to interpret, their behavior is highly susceptible to perturbations of the input data. Generating adversarial examples for these systems is therefore important for understanding model behavior and mitigating risk. We have explored adversarial example generation for black-box image recognition models and propose a gradient-free optimization based method: adversarial example generation is cast as an optimization problem and solved with intelligent search methods. Following this idea we designed and implemented the tool DFA and demonstrated its effectiveness on widely used benchmark sets.
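
As an illustration of casting black-box adversarial example generation as an optimization problem, the sketch below runs a simple query-only hill-climbing search inside an L-infinity ball around the input. The classifier is a toy stand-in, and the search is far simpler than the intelligent optimization behind the DFA tool mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a black-box image classifier: only predict() may be queried.
W = rng.normal(size=(10, 784))
def predict(x: np.ndarray) -> np.ndarray:
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_free_attack(x0, true_label, eps=0.1, iters=500, step=0.02):
    """Hill-climbing search for an adversarial example inside the
    L-infinity ball of radius eps around x0, using model queries only."""
    x = x0.copy()
    best_score = predict(x)[true_label]              # confidence to minimise
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step, size=x.shape)
        candidate = np.clip(candidate, x0 - eps, x0 + eps)   # stay inside the ball
        candidate = np.clip(candidate, 0.0, 1.0)             # valid pixel range
        p = predict(candidate)
        if p[true_label] < best_score:               # accept only improving moves
            x, best_score = candidate, p[true_label]
        if p.argmax() != true_label:                 # label flipped: attack succeeded
            return candidate
    return None

x0 = rng.uniform(0.0, 1.0, size=784)
label = int(predict(x0).argmax())
adv = gradient_free_attack(x0, label)
print("attack succeeded" if adv is not None else "no adversarial example found")
```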

Bio: Lei Bu is a professor and Ph.D. supervisor in the Department of Computer Science and Technology at Nanjing University. His main research areas are software engineering and formal methods, including model checking, real-time hybrid systems, and cyber-physical systems. He received his Ph.D. in computer science from Nanjing University in 2010 and has held visiting and collaborative research positions at institutions including CMU, MSRA, UTD, and FBK. He has published more than fifty papers in leading journals and conferences in the field, such as TCAD, TC, TCPS, TPDS, RTSS, and CAV. He has been selected for the CCF Young Talent Development Program and the MSRA StarTrack Program, and has received the NASAC Young Software Innovation Award, among other honors.

3. Min Zhang: Accelerating Robustness Verification of Deep Neural Networks with Lazy & Eager Falsification

Abstract: Poor scalability is one of the key challenges in the robustness verification of neural networks. In this talk, we introduce a simple yet effective approach that accelerates robustness verification by lazy & eager falsification. We divide the robustness verification problem into sub-problems with respect to target labels. Once the property is falsified for a specific label, we can safely conclude that the neural network is non-robust. The falsification is both lazy and eager: being lazy means that a sub-problem is not falsified unless it has to be, and being eager means that sub-problems more likely to be falsifiable are given higher priority. We leverage symbolic interval propagation and linear relaxation techniques to determine the order of the sub-problems for falsification. Our approach is orthogonal to, and can be integrated with, existing verification techniques. We integrate it with four state-of-the-art verification tools, i.e., MipVerify, Neurify, DeepZ, and DeepPoly. Experimental results show that our approach can significantly improve these tools, with up to 200x speedup when the perturbation distance is set in a reasonable range.
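
The per-label decomposition and ordering can be sketched as follows: naive interval propagation (standing in for the symbolic interval propagation and linear relaxation of the talk) ranks the target labels, and sub-problems are then falsified lazily, one at a time, by random sampling. The network, bounds, and sampling budget are all assumed for illustration; the real approach plugs into verifiers such as MipVerify, Neurify, DeepZ, and DeepPoly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny fully connected ReLU network with random weights (illustrative only).
W1, b1 = rng.normal(size=(20, 4)), rng.normal(size=20)
W2, b2 = rng.normal(size=(3, 20)), rng.normal(size=3)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def interval_bounds(lo, hi):
    """Naive interval propagation through the two-layer ReLU network."""
    def affine(W, b, lo, hi):
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        return pos @ lo + neg @ hi + b, pos @ hi + neg @ lo + b
    l1, u1 = affine(W1, b1, lo, hi)
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)      # ReLU
    return affine(W2, b2, l1, u1)

def lazy_eager_falsify(x0, eps=0.3, samples=2000):
    y_true = int(np.argmax(forward(x0)))
    lo, hi = x0 - eps, x0 + eps
    out_lo, out_hi = interval_bounds(lo, hi)
    # Eager: rank target labels by how far they could overtake the true label
    # according to the interval bounds (larger potential margin = try first).
    margins = out_hi - out_lo[y_true]
    order = [t for t in np.argsort(-margins) if t != y_true]
    # Lazy: attack one sub-problem at a time and stop at the first success.
    for target in order:
        for _ in range(samples):
            x = rng.uniform(lo, hi)
            if np.argmax(forward(x)) == target:
                return x, int(target)        # non-robustness witnessed
    return None, None                        # nothing falsified; hand over to a verifier

x0 = rng.normal(size=4)
cex, label = lazy_eager_falsify(x0)
if cex is not None:
    print("falsified: non-robust, counterexample reaches label", label)
else:
    print("no counterexample found by sampling")
```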

Bio: Min Zhang received his Ph.D. from the Japan Advanced Institute of Science and Technology (JAIST) in 2011 and worked as a postdoctoral researcher at the JAIST Research Center for Software Verification from 2011 to 2014. He joined the School of Software Engineering at East China Normal University in 2014. His main research areas include formal methods and the theory of trustworthy computing, applying formal methods to the trustworthy verification and analysis of intelligent systems, the Internet of Things, and embedded systems. Related work has appeared in conferences and journals including ETAPS, DAC, DATE, TCAD, and the Journal of Software, and received the sole Best Paper Award at APSEC 2019 and the Best Paper Award at DCIT 2015. He has served as program committee chair of TASE 2017 and FM4AI 2019, as a program committee member of international conferences such as DATE 2021 and ICFEM 2016, and as a guest editor of SCP. He has led projects including an NSFC General Program project, an NSFC Young Scientists Fund project, and a Sino-French "Cai Yuanpei" international cooperation project. He is a member of the CCF Technical Committee on Formal Methods.

4. Wanwei Liu: Verifying ReLU Neural Networks from a Model Checking Perspective

Abstract: Neural networks, as an important computing model, have wide applications in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, particularly the relation between input and output. In addition, such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this talk, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties about the network. We show that the model checking algorithm for the Σ2∪Π2 fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented our algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.
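
The ReTL model checking in the talk is much more general, but the flavour of a value-related property such as output reachability can be shown on a toy one-hidden-layer network: enumerate the ReLU activation patterns and solve one linear program per pattern (which also hints at why such properties are expensive). The network, input box, and threshold below are assumptions for illustration only.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

# Small one-hidden-layer ReLU network with a scalar output (illustrative only).
n_in, n_hidden = 3, 5
W1, b1 = rng.normal(size=(n_hidden, n_in)), rng.normal(size=n_hidden)
W2, b2 = rng.normal(size=n_hidden), float(rng.normal())

def output_reachable(lo, hi, threshold):
    """Exact answer to 'is there an x in [lo, hi] with f(x) >= threshold?'
    by enumerating ReLU activation patterns and solving one LP each."""
    for pattern in itertools.product([0, 1], repeat=n_hidden):
        s = np.array(pattern, dtype=float)
        A, b = [], []
        for i in range(n_hidden):                 # fix each ReLU's phase
            if s[i] == 1.0:                       # active:   W1_i x + b1_i >= 0
                A.append(-W1[i]); b.append(b1[i])
            else:                                 # inactive: W1_i x + b1_i <= 0
                A.append(W1[i]); b.append(-b1[i])
        w_eff = (W2 * s) @ W1                     # network is linear on this pattern
        c_eff = (W2 * s) @ b1 + b2
        A.append(-w_eff); b.append(c_eff - threshold)   # enforce f(x) >= threshold
        res = linprog(np.zeros(n_in), A_ub=np.array(A), b_ub=np.array(b),
                      bounds=list(zip(lo, hi)), method="highs")
        if res.success:
            return True, res.x                    # witness input found
    return False, None

lo, hi = -np.ones(n_in), np.ones(n_in)
reachable, witness = output_reachable(lo, hi, threshold=2.0)
print("output >= 2.0 reachable:", reachable, "witness:", witness)
```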

Bio: Wanwei Liu received his Ph.D. degree from the National University of Defense Technology in 2009. He is an associate professor in the College of Computer Science, National University of Defense Technology, and a senior member of CCF. His research interests include theoretical computer science (particularly automata theory and temporal logic), formal methods (particularly verification), and software engineering. His work has been published at TSE, ICSE, ASE, TACAS, and IJCAI, and has won the gold prize (first place) in verification tool tracks of the TACAS-affiliated SV-COMP competition multiple times.

5. Bai Xue: PAC Model Checking of Black-Box Continuous-Time Dynamical Systems

Abstract: In this talk I will present a model checking approach to finite-time safety verification of black-box continuous-time dynamical systems within the framework of probably approximately correct (PAC) learning. Black-box dynamical systems are those for which no model is given, but whose states, evolving continuously through time within a finite time interval, can be observed at some discrete time instants for a given input. The new approach is termed PAC model checking because it incorporates learned models with correctness guarantees expressed in terms of error probability and confidence. Based on the error probability and confidence level, our approach provides statistically formal guarantees that the time-evolving trajectories of the black-box dynamical system over finite time horizons fall within the range of the learned model plus a bounded interval, contributing to insights on the reachability of the black-box system and thus on the satisfiability of its safety requirements. The learned model together with the bounded interval is obtained by scenario optimization, which boils down to a linear programming problem.
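
A minimal sketch of the scenario-optimization step, assuming a toy black-box system and a polynomial template model: sampled trajectories become linear constraints, and a single linear program yields the template coefficients together with the smallest margin (bounded interval) that covers every observed scenario. The PAC bookkeeping that ties the number of scenarios to the error probability and confidence level is part of the talk and is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Black-box stand-in: the (unknown) continuous-time system can only be sampled
# at discrete time instants; here, a noisy damped oscillator.
def observe_trajectory(ts):
    phase = rng.uniform(0, 0.3)
    return np.exp(-0.5 * ts) * np.cos(2.0 * ts + phase) + rng.normal(0, 0.02, ts.size)

ts = np.linspace(0.0, 3.0, 30)                        # finite horizon, discrete instants
features = np.vstack([ts**k for k in range(5)]).T     # polynomial template model

def scenario_fit(n_scenarios=200):
    """Scenario optimization reduced to one LP: find template coefficients c and
    the smallest margin m with |observation - features @ c| <= m for every
    sampled scenario.  Decision variables: [c (5 entries), m]."""
    A, b = [], []
    for _ in range(n_scenarios):
        y = observe_trajectory(ts)
        for phi, yi in zip(features, y):
            A.append(np.concatenate([ phi, [-1.0]])); b.append( yi)   #  phi@c - m <= y
            A.append(np.concatenate([-phi, [-1.0]])); b.append(-yi)   # -phi@c - m <= -y
    cost = np.zeros(6); cost[-1] = 1.0                # minimize the margin m
    res = linprog(cost, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * 5 + [(0, None)], method="highs")
    return res.x[:5], res.x[5]

coeffs, margin = scenario_fit()
print("learned template coefficients:", np.round(coeffs, 3))
print("tube half-width (learned model +/- margin):", round(margin, 3))
```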

Bio: Dr. Bai Xue has been an associate research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences since November 2017. He received the B.Sc. degree in information and computing science from Tianjin University of Technology and Education in 2008, and the Ph.D. degree in applied mathematics from Beihang University in 2014. Prior to joining the Institute of Software, he worked as a research fellow in the Centre for High Performance Embedded Systems at Nanyang Technological University from May 2014 to September 2015, and as a postdoc in the Department für Informatik at Carl von Ossietzky Universität Oldenburg from November 2015 to October 2017.

6. Guozhu Meng: Security Threats and Attack Practices on Deep Neural Networks

Abstract: In recent years, AI technologies typified by deep learning have developed and spread rapidly, but security flaws in neural networks pose serious risks and threats to society and individuals. To pin down these flaws, we conducted a comprehensive survey of security threats to neural networks, covering common attack techniques in four areas, namely model extraction, model inversion, model poisoning, and adversarial attacks, and comparing them on several quantitative metrics. Our empirical analysis finds that, owing to the information asymmetry between attackers and defenders, black-box attackers usually have to construct a large number of samples and issue many interactive queries to the target model, which constrains the efficiency of black-box attacks. We therefore adopted four data reduction techniques to cut data redundancy and improve the efficiency of black-box attacks. Tests on multiple datasets and complex networks show better results than the state of the art. Finally, based on the experimental data, we further discuss the explainability of model training.
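
The four data reduction techniques are not spelled out in the abstract; the sketch below shows just one generic instance of the idea, shrinking the perturbation search space from per-pixel to coarse blocks so that each query explores a lower-dimensional variable, against an assumed toy classifier.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy black-box classifier over 16x16 "images" (256 pixels); only predict() is visible.
W = rng.normal(size=(10, 256))
def predict(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def block_basis(side=16, block=4):
    """Coarse block masks: the perturbation is searched in 16 dimensions, not 256."""
    basis = []
    for by in range(0, side, block):
        for bx in range(0, side, block):
            m = np.zeros((side, side))
            m[by:by + block, bx:bx + block] = 1.0
            basis.append(m.ravel())
    return np.array(basis)                         # shape (16, 256)

def reduced_black_box_attack(x0, true_label, eps=0.15, iters=300, step=0.05):
    B = block_basis()
    coeff = np.zeros(B.shape[0])                   # low-dimensional search variable
    best = predict(x0)[true_label]
    queries = 1
    for _ in range(iters):
        trial = coeff + rng.uniform(-step, step, size=coeff.shape)
        x = np.clip(x0 + np.clip(B.T @ trial, -eps, eps), 0.0, 1.0)
        p = predict(x); queries += 1
        if p[true_label] < best:                   # accept only improving moves
            coeff, best = trial, p[true_label]
        if p.argmax() != true_label:               # misclassified: attack done
            return x, queries
    return None, queries

x0 = rng.uniform(0.0, 1.0, size=256)
label = int(predict(x0).argmax())
adv, used = reduced_black_box_attack(x0, label)
print("success:", adv is not None, "queries used:", used)
```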

Bio: Guozhu Meng received his Ph.D. from Nanyang Technological University, Singapore, in 2017 and joined the Institute of Information Engineering, Chinese Academy of Sciences, as an associate research professor in September 2018. He was named an ACM SIGSAC China Rising Star in 2019 and received a Best Paper Award at the CCF-A conference ICSE in 2018. He has published more than 30 papers at top international conferences and journals in software engineering and information security, such as ICSE, FSE, ASE, and ISSTA. His research interests include the security and privacy of AI systems, software vulnerability analysis and detection, and mobile application security.



