
How Did Hawking Speak (When Did Hawking Lose His Ability to Speak)

Source: 原点资讯 (www.yd166.com) | Time: 2024-04-27 11:30:41 | Author: YD166


Recently, the Future of Life Institute published an open letter titled "Pause Giant AI Experiments: An Open Letter." Its co-signatories include tech entrepreneurs such as Elon Musk, as well as many scientists.

The Future of Life Institute was founded in 2014 by MIT physicist Max Tegmark and others. Its mission is to promote research into "optimistic visions of the future" and to "reduce existential risks facing humanity." Both of these key phrases, the optimistic visions and the risks, refer largely to those brought about by artificial intelligence.

Musk has committed himself, in deeds, to innovation and entrepreneurship across multiple frontier fields. His mission is to accelerate humanity's progress in the technologies it needs, so that before humanity destroys itself it can develop the ability to leave Earth and survive in the wider universe. His deeper driving force, as he himself has put it, is to ensure that consciousness endures in the universe. As a co-signatory of the open letter, Musk shows a profound worry that uncontrolled AI could destroy human civilization. Yet this very act undercuts his "deeper driving force": quite clearly, he is treating consciousness differently according to its origin, implying that human consciousness (or that of some future human-machine hybrid species) deserves protection more than a purely artificial consciousness does.

We have no intention of judging the limitations of this brand of "anthropocentrism." In fact, at a moment when no universal truth of the cosmos has yet been discovered, the limitations of any viewpoint are unavoidable. Limitation is the price of partial truth. And on the other side of a limitation, there is often a certain wisdom.

Hawking, an advisory board member of the Future of Life Institute, held views close to those of Musk and Tegmark, and likewise had complicated feelings about the future of artificial intelligence. Six years ago, when not even the embryo of GPT existed and most people were still deeply skeptical of strong AI, Hawking delivered a stirring address at the 2017 Global Mobile Internet Conference. Its wisdom is worth savoring today, perhaps critically.

Below is the full English text of Hawking's address at GMIC 2017:


Over my lifetime, I have seen very significant societal changes. Probably one of the most significant, and one that is increasingly concerning people today, is the rise of artificial intelligence.

In short, I believe that the rise of powerful AI will be either the best thing, or the worst, ever to happen to humanity.

I have to say now that we do not yet know which. But we should do all we can to ensure that its future development benefits us and our environment. We have no other option. I see the development of AI as a trend with its own problems that we know must be dealt with, now and into the future.

The progress in AI research and development is swift. And perhaps we should all stop for a moment and focus our research not only on making AI more capable, but on maximizing its societal benefit.

Such considerations motivated the American Association for Artificial Intelligence's 2008–2009 Presidential Panel on Long-Term AI Futures, which until recently had focused largely on techniques that are neutral with respect to purpose.

But our AI systems must do what we want them to do. Interdisciplinary research can be a way forward: ranging from economics, law, and philosophy to computer security, formal methods, and of course various branches of AI itself.

Everything that civilization has to offer is a product of human intelligence, and I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.

It therefore follows that computers can, in theory, emulate human intelligence, and exceed it. But we don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it.

Indeed, we have concerns that clever machines will be capable of undertaking work currently done by humans, swiftly destroying millions of jobs.

While the primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. AI would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded. This would bring great disruption to our economy.

And in the future, AI could develop a will of its own, a will that is in conflict with ours. Although I am well-known as an optimist regarding the human race, others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. I am not so sure.

In January 2015, I, along with the technology entrepreneur Elon Musk and many other AI experts, signed an open letter on artificial intelligence, calling for serious research on its impact on society.

In the past, Elon Musk has warned that superhuman artificial intelligence is capable of providing incalculable benefits, but, if deployed incautiously, will have an adverse effect on the human race.

He and I sit on the scientific advisory board of the Future of Life Institute, an organization working to mitigate existential risks facing humanity, and which drafted the open letter. The letter called for concrete research on how we could prevent potential problems while also reaping the potential benefits AI offers us, and it is designed to get AI researchers and developers to pay more attention to AI safety.

In addition, for policymakers and the general public, the letter is meant to be informative, but not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues.

For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter," lays out detailed research priorities in an accompanying twelve-page document.

For the last 20 years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in some environment. In this context, intelligence is related to statistical and economic notions of rationality. Colloquially, the ability to make good decisions, plans, or inferences.
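
As an aside, the "intelligent agent" framing Hawking describes can be made concrete in a few lines. The sketch below only illustrates the perceive-decide-act loop; the class and method names are invented for this example, and a real agent would replace the hand-written policy with learned decision rules.

```python
# A minimal sketch of an "intelligent agent": a system that perceives an
# environment and acts in it. All names here are illustrative inventions.

class Environment:
    """Toy environment whose state the agent should drive toward zero."""
    def __init__(self, state: float = 10.0):
        self.state = state

    def observe(self) -> float:
        return self.state            # perception: what the agent can sense

    def apply(self, action: float) -> None:
        self.state += action         # acting changes the environment


class Agent:
    """Rational in the colloquial sense: maps observations to decisions."""
    def decide(self, observation: float) -> float:
        return -0.5 * observation    # hand-written policy, not a learned one


env, agent = Environment(), Agent()
for step in range(5):
    obs = env.observe()              # perceive
    env.apply(agent.decide(obs))     # act
    print(f"step {step}: state = {env.state:.3f}")
```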

As a result of this recent work, there has been a large degree of integration and cross-fertilisation among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.

But, and as I have said, the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits, while avoiding potential pitfalls.

Artificial intelligence research is now progressing rapidly. And this research can be discussed in terms of the short term and the long term. Some short-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident.
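
To make that trade-off concrete, it can be framed as a comparison of expected harms, probability times severity. The numbers in this sketch are invented purely for illustration and carry no empirical weight.

```python
# Hypothetical comparison of two emergency maneuvers by expected harm.
# All probabilities and severities are made up for illustration.

def expected_harm(p_accident: float, severity: float) -> float:
    """Probability-weighted severity of an outcome."""
    return p_accident * severity

swerve = expected_harm(p_accident=0.05, severity=100.0)  # small risk, major accident
brake = expected_harm(p_accident=0.80, severity=5.0)     # likely, but minor accident

print(f"swerve: {swerve:.1f}, brake: {brake:.1f}")       # swerve: 5.0, brake: 4.0
# With these numbers braking wins, but the ranking flips as the inputs
# change, which is why the choice is as much ethical as it is technical.
```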

Other concerns relate to lethal intelligent autonomous weapons. Should they be banned? If so, how should autonomy be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned? Other issues include privacy concerns, as AI becomes increasingly able to interpret large surveillance datasets, and how best to manage the economic impact of jobs displaced by AI.

Long-term concerns comprise primarily the potential loss of control of AI systems, via the rise of super-intelligences that do not act in accordance with human wishes, and the threat that such powerful systems would pose to humanity. Are such dystopic outcomes possible?

If so, how might these situations arise? What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous super-intelligence, or the occurrence of an intelligence explosion?

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this. Therefore more research is necessary to find and validate a robust solution to the control problem.
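
One way to see why simple utility functions fall short is reward misspecification: an agent can score perfectly on the stated objective while defeating its intent. The toy example below is an editorial illustration, with every name and number invented.

```python
# Toy reward misspecification: the designer wants a clean room, but the
# "simple utility function" only penalizes visible dust. Invented data.

actions = {
    "vacuum the room":        {"visible_dust": 0, "actually_clean": True},
    "sweep dust under a rug": {"visible_dust": 0, "actually_clean": False},
    "do nothing":             {"visible_dust": 9, "actually_clean": False},
}

def proxy_reward(outcome: dict) -> int:
    return -outcome["visible_dust"]   # penalizes visible dust, nothing else

best = max(proxy_reward(o) for o in actions.values())
optima = [name for name, o in actions.items() if proxy_reward(o) == best]
print(optima)   # ['vacuum the room', 'sweep dust under a rug']
# The proxy cannot distinguish the intended behavior from gaming it:
# the control problem in miniature.
```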

Recent landmarks, such as the self-driving cars already mentioned, or a computer winning at the game of Go, are signs of what is to come. Enormous levels of investment are pouring into this technology.

The achievements we have seen so far will surely pale against what the coming decades will bring, and we cannot predict what we might achieve when our own minds are amplified by AI.

Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one, industrialisation. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.

But it could also be the last, unless we learn how to avoid the risks. I have said in the past that the development of full AI could spell the end of the human race, for example through the ultimate use of powerful autonomous weapons. Earlier this year, I, along with other international scientists, supported the United Nations convention to negotiate a ban on nuclear weapons.

We await the outcome with nervous anticipation. Currently, nine nuclear powers have access to roughly 14,000 nuclear weapons, any one of which can obliterate cities, contaminate wide swathes of land with radioactive fall-out, and the most horrible hazard of all, cause a nuclear-induced winter, in which the fires and smoke might trigger a global mini-ice age.

The result is a complete collapse of the global food system, and apocalyptic unrest, potentially killing most people on earth. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them, and discovered that their effects are even more horrific than first thought.

At this stage, I may have frightened you all here today with talk of doom. I apologise. But it is important that you, as attendees at today's conference, recognise the position you hold in influencing the future research and development of today's technology.

I believe that we can join together to call for the support of international treaties, or sign letters presented to individual governmental powers. Technology leaders and scientists are doing what they can to obviate the rise of uncontrollable AI.

In October last year, I opened a new centre in Cambridge, England, which will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute dedicated to researching the future of intelligence as crucial to the future of our civilisation and our species. We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity.

So it's a welcome change that people are studying instead the future of intelligence. We are aware of the potential dangers, but I am at heart an optimist, and believe that the potential benefits of creating intelligence are huge. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by industrialisation.

Every aspect of our lives will be transformed. My colleague at the institute, Huw Price, has acknowledged that the centre came about partially as a result of the university's Centre for Existential Risk. That institute examines a wider range of potential problems for humanity, while the Leverhulme Centre has a narrower focus.

Recent developments in the advancement of AI include a call by the European Parliament for drafting a set of regulations to govern the use and creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure the rights and responsibilities of the most capable and advanced AI.

A European Parliament spokesman has commented that, as a growing number of areas in our daily lives are increasingly affected by robots, we need to ensure that robots are, and will remain, in the service of humans.

The report, as presented to MEPs, makes it clear that it believes the world is on the cusp of a new industrial robot revolution. It examines whether or not providing legal rights for robots as electronic persons, on a par with the legal definition of corporate personhood, would be permissible.

But it stresses that, at all times, researchers and designers should ensure that all robotic design incorporates a kill switch. This didn't help the scientists on board the spaceship with Hal, the malfunctioning robotic computer in Kubrick's 2001: A Space Odyssey, but that was fiction. We deal with fact. Lorna Brazell, a partner at the multinational law firm Osborne Clarke, says in the report that we don't give whales and gorillas personhood, so there is no need to jump at robotic personhood.

But the wariness is there. The report acknowledges the possibility that, within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship. Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical, and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which has three months to decide what legislative steps it will take.
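
Returning to the kill switch the report insists on: in its simplest software form, it is an interrupt that the control loop checks on every cycle and never overrides. The sketch below is a toy pattern assumed for illustration, not a description of how any real robot implements the requirement.

```python
import threading

# Toy kill-switch pattern: the loop halts unconditionally once the flag is
# set. Purely illustrative.

kill_switch = threading.Event()

def control_loop() -> None:
    steps = 0
    while not kill_switch.is_set():   # checked on every cycle
        steps += 1                    # ... one unit of robot work ...
    print(f"halted cleanly after {steps} steps")

worker = threading.Thread(target=control_loop)
worker.start()
kill_switch.set()                     # the operator presses the button
worker.join()
```

The essential property of the pattern is that the halt condition lives outside the agent's own decision logic: nothing inside `control_loop` can refuse the stop.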

We too have a role to play in making sure the next generation has not just the opportunity, but the determination, to engage fully with the study of science at an early level, so that they can go on to fulfil their potential, and create a better world for the whole human race.

This is what I meant, when I was talking to you just now about the importance of learning and education. We need to take this beyond a theoretical discussion of how things should be, and take action, to make sure they have the opportunity to get on board. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be. And you are the pioneers. I wish you well.

END
