Abstract: The long-standing debate over whether the purpose of punishment is retribution or prevention ultimately moved toward compromise, with each side borrowing from and leaning on the other, producing an "integrated" theory of the purpose of punishment that fuses retribution and prevention and aims to achieve both prevention and retributive justice. Yet reflection prompted by the failure of prevention, together with the debunking of the rational-actor model by the theories of bounded rationality and bounded willpower developed in Western behavioral science, shows that the necessary conditions for the prevention theory to work are not self-sufficient; the pull of hyperbolic discounting and the satisficing principle on offenders' choices exposes the divergence between the rational-actor premise of the prevention view and how real people actually choose; and the negative effects of special prevention through imprisonment, the criticism directed at preventive criminal legislation, and society's endorsement of retributive justice within the limits of deserved punishment leave the crime-preventive function of punishment stalled at the stage of institutional design. The departure of punishment's preventive logic from real human behavior detaches the integrated theory from practice, signaling that punishment should be structured prudently on the premise of the (potential) offender's irrationality, and that multidimensional preventive measures should be adopted to repair and enhance human rationality and thereby correct and prevent crime.
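A brief gloss, not part of the original abstract, on the behavioral-economics contrast it invokes: under the exponential discounting assumed by the rational-actor model, a sanction of severity $A$ imposed after delay $D$ is valued at $V_{\exp}(D) = A\,e^{-rD}$, whereas under the hyperbolic form commonly observed in real decision-makers (Mazur's formulation) it is valued at $V_{\mathrm{hyp}}(D) = A/(1 + kD)$, where $r$ and $k$ are individual discount rates. Because the hyperbolic curve drops steeply at short delays, a distant and uncertain punishment carries little weight at the moment of choice, which is the gap between the deterrence premise and actual offender behavior that the abstract highlights.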
Abstract: How to strike a balance between regulating the risks of artificial intelligence and supporting AI innovation has become the core question of AI governance. To ease the tension between regulation and innovation, regulators in many countries have gradually extended the regulatory sandbox, which originated in the financial sector, to AI governance. The AI regulatory sandbox not only helps control the risks of the technology but also avoids stifling AI innovation, opening a feasible new path for AI governance in China. It provides a controlled environment for developing and testing novel AI systems that have not yet been put on the market, reserving a necessary trial-and-error space for innovators within the bounds of regulatory oversight. Compared with traditional top-down hard regulation, ex post regulation, and strict regulation, the regulatory sandbox governs AI across its full life cycle under the principles of agility and inclusive prudence. Yet it also has inherent limitations and may run into practical obstacles in operation, which call for better-designed institutions to overcome them. China's AI regulatory sandbox regime should establish and refine entry and exit mechanisms, a mechanism coordinating the sandbox with personal information protection, and mechanisms for exemption, disclosure, and communication, ...