
A critical juncture for the West


As the Kremlin takes the major step of national mobilisation to fight a war of aggression against a nascent Western democracy, and once again repeats nuclear threats against the West as a whole, it would seem that our values and way of life are under threat from an outside actor. And sure enough, we are threatened. But the biggest threat we face at this perilous and delicate historical juncture is not external. Vladimir Putin is a mouse in comparison to what threatens our way of life from within, ostensibly in the name of our very Western values.

Before I begin elaborating on what I have to say, it is important that you understand what I mean by 'the West': despite the name, it's not a geographical location or even an ethnicity, but a system of fundamental values and way of life. For historical and bio-evolutionary reasons, these values and way of life still correlate with particular geographies and ethnicities, but to me this is entirely circumstantial. For instance, one of the most unambiguously Western voices in the media today is Fareed Zakaria. And a disproportionate number of those who threaten the Western way of life today are Caucasians born in the Western hemisphere. So no, to be Western is not an ethnicity or a domicile; it is to espouse a system of fundamental values and a way of life.

But what way of life? What fundamental values? It is almost inevitably unfair and inaccurate to summarise the answer to these questions in a simple statement. Yet, with that in mind, I will try: to be Western is to hold the uniqueness of individual expression in the highest regard. For us, people are not mere numbers, anonymous drones or cogs in a sociopolitical machine; people are unique individuals who must be allowed to express themselves in their own way, for each and every one has something unique and valuable to contribute. And by 'expression' I mean much more than just freedom of speech, although it entails the latter as well: individual expression is about being in the world in our own unique ways. This individual expression is as much embodied in speech as it is in art, philosophy, science, profession, hobbies, relationships, and behaviour in general. Westerners hold as sacred our right to be who we are, and to live life in our own unique ways—as determined by our muses, daimons, souls, or whatever you want to call it—as long as doing so does not infringe on the rights of other individuals to do the same.

Notice that this high regard for individual expression has two corollaries: individual liberty and social tolerance. To be able to express ourselves in our own unique ways we must have the freedom, enshrined in laws and institutions, to do so. And because others have the same right to express their unique selves as we do, it is incumbent on all of us to tolerate the choices of others (again, as long as they don't infringe on our own liberties).

As such, the fundamental value of individual expression, when shared in a society, implies tolerance for another's tastes, preferences, dispositions, and so forth. For to argue against another's right to self-expression is to argue against one's own right. This way, one overarching, shared value unfolds into a fertile field for the growth of a variety of divergent peculiarities. I may be a heterosexual man disposed to philosophy and science, who enjoys baroque music, but my freedom to express myself in these ways implies tolerance to, say, a homosexual woman who does art for a living and likes to listen to heavy metal (as long as her freedom to be herself does not infringe on my freedom to be myself). This is how the Western way of life works. We celebrate and encourage our differences, for in their complementarities lies our collective strength. The sum-total of our innate natural drives—of what our muses, daimons, souls, inspirations, aspirations, etc., lead us to do in life—produces our culture, our economy, our science, our technology, our art, and everything that makes us a significant force in the world.

Arguably, no country in the world is fully Western, just as no country is fully non-Western. Even the two major nations today that seem to embody the very antithesis of Western values—Russia and China—do grant limited individual freedoms to their citizens. What I am trying to get across is a matter of degree, not of black-and-white pigeonholing.

In this spirit, the important thing to realise is that, in order to properly uphold the fundamental value of individual expression, Western societies must ensure that government is never driven by individual agendas. This may sound contradictory at first, but it surely isn't: when government becomes about one or a few individuals, who then enforce their peculiar dispositions and views on the entire population, liberty and tolerance die; the vibrant colours of individual expression disappear into a dull and grey background of artificial conformity, without the life-force of nature to propel them. The governments of Western societies must, instead, be driven by institutions and the rule of law, which channel and harmonise our distinct individual drives.

And this is why nations like Russia and China, in which one individual becomes the perennial face and driver of government, above institutions and the rule of law, are by and large incompatible with Western values and ways of life. This doesn't necessarily mean that they are a threat to us: it would be supremely arrogant to think that Western values should rule the entire world. Different peoples are entitled to their own value systems; to inherit and shape their own cultures and ways of life, just as we are entitled to ours. But when a sovereign people that chose the Western path—as Ukraine explicitly and overwhelmingly did in 2013 and 2014—is cravenly assaulted by a foreign power, then that foreign power does become a threat to all of us, Westerners.

Yet, neither Russia nor China is the greatest threat to Western values today. That dishonour goes to those among us who, through the very freedoms granted to them by Western political systems, seek to undermine our values. Those among us who admire and pander to foreign dictators, who seek to emulate the slick, sanitised veneer of authoritarian regimes, who misuse our open political systems for personal gain, who see themselves as being above institutions and the rule of law: those are the true enemies within. Their approach to public service is acid to the Western way of life. They must not be tolerated, for—as philosopher Karl Popper once observed—the one thing that tolerant societies must never tolerate is intolerance itself.

Ironically, these demagogues claim to want to protect our Western values: think of how the extreme right—embodied in e.g. Marine Le Pen in France, the Trump/MAGA movement in the USA, and the Hungarian regime of Viktor Orbán—leverages precisely their people's anxieties about threats to their traditions and way of life. Yet, their attitudes and actions embody the very antithesis of the values they claim to protect: cults of personality taking precedence over institutions and the rule of law; disregard for the personal liberties and rights of minorities; adopting lies as a matter-of-course instrument of government (which is precisely what the Russian and Chinese governments do); disregard for objectivity, facts, reason, evidence and coherent argumentation; and so on. How can the West be protected by a psychopathological Trump, who idolises a criminal Putin, and even a deranged Kim? Who repeatedly lies through his teeth without a glimmer of shame? Who uses the (often legitimate) grievances of his base solely to advance his own egomaniacal personal agenda? How can European ways of life be safeguarded by those who want to acquiesce to Russian expansionism? How can the West be protected by elements who regard facts, science, tolerance and thoughtfulness as weaknesses, and who argue by puerile, reason-free, knee-jerk emotionality? These elements are the greatest threats to the West, not Putin or Xi.

But I am an equal-opportunities critic, and so I don't give the so-called 'left' (I use scare quotes here because it is ludicrous to think that everything in politics can be pigeonholed in one of only two categories) a free pass either. For we must try to understand how demagogues in our midst, who constitute the biggest threat to Western values today, have come to gather support precisely from those who are anxious about losing their Western way of life. How on Earth could this happen?

I won't pretend to know the full answer to this question, but I will risk a partial hypothesis: when the legitimate grievances and anxieties of a large segment of the population are systematically dismissed, and even pooh-poohed, by urban elites, people are left with no psychologically tenable alternative but to lend their support to anti-elite demagogues (who, ironically, are often themselves members of the urban elite). This seems to be particularly the case in the USA, where so-called 'liberals' seem to be quick to dismiss and alienate what I will describe as traditional, heartland mentality. The deplorable views of a very few (they are always there, aren't they?) motivate quick and utterly irresponsible generalisations, reflected in the labelling of almost half the country as 'deplorable.' Is this a Western attitude? Does this reflect social tolerance? Reason? Thoughtfulness? Respect for individual expression?

I live in a country where almost half the land is below sea level. These so-called 'polders' are kept dry by the continuous running of pumps—originally powered by windmills—and various other water defences, which are erected and maintained by the collective effort of the population. As such, the Netherlands is a nation where a failure to respect your neighbour's views and reach some form of consensus would swiftly lead to the literal loss of half the country. If we start fighting each other and fail to cooperate, the pumps stop running and we get more than just our feet wet. Western values here are a matter of life and death; literally.

Yet, isn't this also the case across Western societies today? Flooding is just one of many ways a country can be lost. If respect for individual differences isn't achievable, what is the way forward for, say, the USA? Another civil war? Secession? The Russian and Chinese governments would love it, wouldn't they? How do you think they would react to an opportunity like that? Nonetheless, the mere attempt to understand the other side in one's own society seems to be seen today as weakness, even a betrayal of the cause! This is perilous, for it can quickly make the pumps stop running.

We tend to screw things up by going too far in our well-meaning attempts to correct the ills of our time. History is bursting with examples. For instance, Martin Luther correctly diagnosed the many ills of the Catholic Church of his time and tried to fix them. But soon enough Protestantism went so far as to reduce religious service to some form of legal audience. Even priests started dressing like judges. And when the Catholic Church reacted and tried to revitalise religion in the form of the Counter-Reformation, we got the Inquisition. How adorable.

Similarly, we go too far in recognising the ills of our society when this recognition leads to generalisations, alienation, and even hate. There is nothing shameful about trying to understand where the other side is coming from. There is nothing treacherous about engaging in dialogue. Maybe new vistas will open, to the surprise of all parties involved. For even the urban literati may have something to learn from rooted heartland mentality. After all, we are never born in a vacuum, without a past and a historical context, without traditions and ancestors, without a relationship with the land under our feet. Realising this for the first time, after years indulging in the superficiality, uprootedness and lack of teleological context of so-called 'liberal' thinking, can be a sobering and very healthy experience.

Let me try to make my point more concrete with a couple of very polemical examples. Like many urbanites, having pondered the question of abortion for a while, I've come to the conclusion that, on balance, women must have the right to choose. If abortion ultimately proves to be a sin, then it is their responsibility whether to commit the sin, not lawmakers'; for sovereignty over our own bodies must be the red line. However, I do not dismiss the question lightly as a slam dunk, as some of my urbanite peers do; no, an embryo is a life. The day we take lightly the decision to end a life is the day of our doom as a civilised society. The pro-life movement, even if ultimately wrong, is not baseless or deserving of unexamined contempt. Recognising this is a precondition to a sane dialogue under the values of a truly Western society.

Immigration is another polemical example. As one of the urban literati, I am keenly aware of the tremendous boost in value and injection of vitality that our societies and economies stand to gain from motivated, law-abiding, hard-working immigrants. I am also keenly aware of the population bomb that will soon explode under the feet of our affluent Western societies, for the simple reason that—for decades now—we haven't been making enough babies to continue to live as before. As our population ages, we will run out of younger people to nurse us in hospitals when we get sick, deliver our groceries, maintain our houses, and so on. Technology hasn't yet advanced enough for us to replace people with machines for everything that matters. And so I understand the opportunity former German Chancellor Angela Merkel spotted in 2015, when suddenly a million young and healthy Syrians, many of them well educated, showed up at the gates of Germany (alongside Japan, Germany stands to suffer the most from its coming population implosion). It must have felt like Christmas.

Yet, I was there during that fateful New Year's Eve of 2015, when the behaviour of young male immigrants towards German women scandalised German society. Hence, I take seriously a real, concrete problem that 'liberals' often dismiss, underestimate or overlook: cultural compatibility.

Societies evolve their mechanisms based on the characteristics of the prevailing local culture. In northern Europe—the culture I am most familiar with—social mechanisms are largely based on very high social trust. In Denmark, for instance, it's usual for farmers to build wooden huts next to the nearest road, and then load them with farm produce. They hang a little board showing the prices and place a little cash box on a counter, so people can come and pick up what they need, leaving the proper amount of money behind. The huts are not manned: the whole thing is based on the trust that nobody will steal the money or the produce, and everybody will pay the proper amount.

Another example: until about 20 years ago, Dutch train stations had no gates. You could enter the station from the street, proceed to a platform and then board a train, with nobody checking if you have a ticket. Even during the train trip itself, only very seldom would a conductor ask to see your ticket. And if you didn't have one (because, of course, you just forgot to buy one, or you didn't have time to do it before the train's departure), they would charge you just twice the normal amount for one.

Predictably, changes in the prevailing culture, partly caused by immigration, have led to a new prevailing calculus: it's more economical to never buy a ticket and simply pay twice the price on the rare occasions you are asked for one. And thus, today, Dutch train stations are filled with electronic gates, surveillance and ticket checks.

People used to a traditional culture of social trust profoundly resent these changes. They are robbed of the feeling they previously had, that they live among people they can trust and count on, even if they don't know them personally; and that they are themselves trusted. An impersonal and alienating ethos of suspicion, isolation and antagonism takes over. It violates one's core values, traditions, ancestral ways of life in a manner that hits one hard and deep, for it robs one of social cohesion and coziness. It makes one feel like an alien in one's own country.

The 'liberal' urban literati are often blind to these psychological facts. Liberalisation by the defacement of culture and traditions is hard on heartland people—damn, it's hard on me—and understandably so. We ignore their grievances at our own peril, for a demagogue like Trump will know exactly how to appeal to, and manipulate, precisely those grievances.

Snob elitism, contempt for heartland mentality and tradition, generalisation and alienation are every bit as antithetical to Western values—to the respect we owe to other people's liberties and peculiarities—as Trumpism and the criminalisation of abortion. The day we collectively realise this is the day we will cut the lifeline of demagogues like Trump, Le Pen, Orbán, and countless others. And as a bonus, it will also be the day the Putins and Xis of this world understand that they can't win.

For liberty is not only more vibrant, it is stronger than authoritarianism, as Ukraine is now demonstrating to anyone who cares to watch. It is a geopolitical myth to think of China's or Russia's governing and economic systems as, in any sense whatsoever, stronger than those of 'messy' democracies. China, in fact, has an incredibly fragile economy dependent on massive imports of oil, food and know-how; all of which, in turn, depend on the West (yes, even China's oil imports depend directly on the USA's ability to secure shipping lanes from the Middle East to Shanghai and Beijing). Russia, in turn, makes essentially nothing; they have so little economically-relevant know-how that we can dismiss it altogether. All they can do is extract stuff from their ground and ship it through pipelines (made by Germans), for they don't even have the required infrastructure to liquefy gas. All of Russia's cutting-edge wonder weapons, hypersonic missiles and the like, depend on imports of Western technology: integrated circuits, software, electronic systems, etc. And so do China's.

Our noisy external rivals are paper tigers, for authoritarianism can never hope to match the strength of a free society's sum-total of individual creativity and drive. They are not the real threats. The real ones are within, internal parasites of the strength nurtured by liberty. Luckily for us, the way to neutralise this threat is to double down on our values: respect for individual expression and tolerance for the dispositions of others. Should we do this through the mighty tool we call a 'vote,' our way of life will survive.


It’s Time To Rethink The Change Gospel

In a nutshell, we are talking about change more, but doing it less. That’s a problem. Managers who want to be seen as change leaders launch too many initiatives. Employees, for their part, get jaded...


Why you can’t rebuild Wikipedia with crypto


Whenever a fresh disaster happens on the blockchain, increasingly I learn about it from the same destination: a two-month old website whose name suggests the deadpan comedy with which it chronicles the latest crises in NFTs, DAOs, and everything else happening in crypto.

Launched on December 14th, Web3 Is Going Just Great is the sort of thing you almost never see any more on the internet: a cool and funny new website. Its creator and sole author is Molly White, a software engineer and longtime Wikipedia contributor who combs through news and crypto sites to find the day’s most prominent scams, schemes, and rug pulls.

Organized as a timeline and presented in reverse chronological order, to browse Web3 Is Going Just Great is to get a sense...



Exploring mind-bending questions about reality and virtual worlds via The Matrix

Virtual worlds might be digital, but they can be as real and meaningful as our physical world, philosopher David Chalmers argues in his new book, Reality+: Virtual Worlds and the Problems of Philosophy.


There's a famous scene in The Matrix where Neo goes to see The Oracle. He meets another potential in the waiting room: a young child who seemingly bends a spoon with his mind. Noticing Neo's fascination, he tells him, "Do not try and bend the spoon. That's impossible. Instead, only try to realize the truth." And what is that truth? "There is no spoon," the child says.

The implication is that the Matrix is an illusion, a false world constructed by the machines to keep human beings sedated and docile while their bodies serve as batteries to power the Matrix. But what if this assumption is wrong, and the Matrix were instead just as real as the physical world? In that case, the child would more accurately have said, "Try to realize the truth. There is a spoon—a digital spoon."

That's the central argument of a new book, Reality+: Virtual Worlds and the Problems of Philosophy, by New York University philosopher David Chalmers. The Australian-born Chalmers is perhaps best known for his development in the 1990s of what's known as the hard problem of consciousness. Things like the ability to discriminate, categorize, and react to environmental stimuli; the brain's ability to integrate information; and the difference between wakefulness and sleep can all be explained by identifying an underlying mechanism. The hard problem, by contrast, asks why any of this processing is accompanied by subjective experience at all—a question that seems to resist that kind of mechanistic explanation.



Researchers Build AI That Builds AI | Quanta Magazine


Artificial intelligence is largely a numbers game. When deep neural networks, a form of AI that learns to discern patterns in data, began surpassing traditional algorithms 10 years ago, it was because we finally had enough data and processing power to make full use of them.

Today’s neural networks are even hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons. The goal is to find nearly ideal values for them, a process known as optimization, but training the networks to reach this point isn’t easy. “Training could take days, weeks or even months,” said Petar Veličković, a staff research scientist at DeepMind in London.

That may soon change. Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a “hypernetwork” — a kind of overlord of other neural networks — that could speed up the training process. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in fractions of a second, and in theory could make training unnecessary. Because the hypernetwork learns the extremely complex patterns in the designs of deep neural networks, the work may also have deeper theoretical implications.

For now, the hypernetwork performs surprisingly well in certain settings, but there’s still room for it to grow — which is only natural given the magnitude of the problem. If they can solve it, “this will be pretty impactful across the board for machine learning,” said Veličković.

Getting Hyper

Currently, the best methods for training and optimizing deep neural networks are variations of a technique called stochastic gradient descent (SGD). Training involves minimizing the errors the network makes on a given task, such as image recognition. An SGD algorithm churns through lots of labeled data to adjust the network’s parameters and reduce the errors, or loss. Gradient descent is the iterative process of climbing down from high values of the loss function to some minimum value, which represents good enough (or sometimes even the best possible) parameter values.
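The descent described above can be sketched in a few lines. This is a deliberately tiny, one-parameter toy with made-up data and a made-up learning rate, not the paper's setup: each step nudges the parameter against the gradient of the squared loss on a single labeled example.

```python
import numpy as np

# Toy illustration (made-up data; not the paper's setup): fit y = w*x with
# stochastic gradient descent, minimizing squared error one example at a time.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 0.1 * rng.normal(size=100)  # true parameter is 3.0

w = 0.0              # initial parameter value
lr = 0.05            # learning rate: the step size down the loss surface
for epoch in range(50):
    for i in rng.permutation(len(x)):  # "stochastic": one labeled example per step
        err = w * x[i] - y[i]          # prediction error on this example
        grad = 2 * err * x[i]          # gradient of the squared loss w.r.t. w
        w -= lr * grad                 # step against the gradient
# w has now descended to near the loss minimum, close to the true value 3.0
```

Real networks do exactly this, only with millions or billions of parameters instead of one, which is why training can take days to months.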

But this technique only works once you have a network to optimize. To build the initial neural network, typically made up of multiple layers of artificial neurons that lead from an input to an output, engineers must rely on intuitions and rules of thumb. These architectures can vary in terms of the number of layers of neurons, the number of neurons per layer, and so on.

One can, in theory, start with lots of architectures, then optimize each one and pick the best. “But training [takes] a pretty nontrivial amount of time,” said Mengye Ren, now a visiting researcher at Google Brain. It’d be impossible to train and test every candidate network architecture. “[It doesn’t] scale very well, especially if you consider millions of possible designs.”

So in 2018, Ren, along with his former University of Toronto colleague Chris Zhang and their adviser Raquel Urtasun, tried a different approach. They designed what they called a graph hypernetwork (GHN) to find the best deep neural network architecture to solve some task, given a set of candidate architectures.

The name outlines their approach. “Graph” refers to the idea that the architecture of a deep neural network can be thought of as a mathematical graph — a collection of points, or nodes, connected by lines, or edges. Here the nodes represent computational units (usually, an entire layer of a neural network), and edges represent the way these units are interconnected.
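Under this graph view, a small architecture might be encoded roughly as follows. The node types and edge list here are inventions for the sketch, not the actual format used by graph hypernetworks:

```python
# Illustrative encoding of a small convolutional network as a directed graph.
# Each node is a computational unit (e.g., a layer); each edge says where
# that unit's output flows next.
nodes = ["input", "conv3x3", "relu", "conv3x3", "relu", "global_pool", "linear"]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]  # (source, destination)

# Adjacency matrix: adj[i][j] = 1 if information flows from node i to node j.
n = len(nodes)
adj = [[0] * n for _ in range(n)]
for src, dst in edges:
    adj[src][dst] = 1
```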

Here’s how it works. A graph hypernetwork starts with any architecture that needs optimizing (let’s call it the candidate). It then does its best to predict the ideal parameters for the candidate. The team then sets the parameters of an actual neural network to the predicted values and tests it on a given task. Ren’s team showed that this method could be used to rank candidate architectures and select the top performer.
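The ranking procedure just described can be sketched as follows. `predict_params` and `evaluate` are hypothetical stand-ins for the graph hypernetwork and the task metric; only the control flow mirrors the method:

```python
# Sketch of ranking candidate architectures by the performance they achieve
# with hypernetwork-predicted (rather than trained) parameters.
def rank_candidates(candidates, predict_params, evaluate):
    scored = []
    for arch in candidates:
        params = predict_params(arch)          # one forward pass, no training
        scored.append((evaluate(arch, params), arch))
    scored.sort(key=lambda s: s[0], reverse=True)  # best score first
    return [arch for _, arch in scored]

# Toy usage with fake architecture names and a fake accuracy table:
candidates = ["net_a", "net_b", "net_c"]
fake_scores = {"net_a": 0.61, "net_b": 0.58, "net_c": 0.67}
ranking = rank_candidates(candidates,
                          predict_params=lambda arch: None,
                          evaluate=lambda arch, params: fake_scores[arch])
print(ranking)  # ['net_c', 'net_a', 'net_b']
```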

When Knyazev and his colleagues came upon the graph hypernetwork idea, they realized they could build upon it. In their new paper, the team shows how to use GHNs not just to find the best architecture from some set of samples, but also to predict the parameters for the best network such that it performs well in an absolute sense. And in situations where the best is not good enough, the network can be trained further using gradient descent.

“It’s a very solid paper. [It] contains a lot more experimentation than what we did,” Ren said of the new work. “They work very hard on pushing up the absolute performance, which is great to see.”

Training the Trainer

Knyazev and his team call their hypernetwork GHN-2, and it improves upon two important aspects of the graph hypernetwork built by Ren and colleagues.

First, they relied on Ren’s technique of depicting the architecture of a neural network as a graph. Each node in the graph encodes information about a subset of neurons that do some specific type of computation. The edges of the graph depict how information flows from node to node, from input to output.

The second idea they drew on was the method of training the hypernetwork to make predictions for new candidate architectures. This requires two other neural networks. The first enables computations on the original candidate graph, resulting in updates to information associated with each node, and the second takes the updated nodes as input and predicts the parameters for the corresponding computational units of the candidate neural network. These two networks also have their own parameters, which must be optimized before the hypernetwork can correctly predict parameter values.

To do this, you need training data — in this case, a random sample of possible artificial neural network (ANN) architectures. For each architecture in the sample, you start with a graph, and then you use the graph hypernetwork to predict parameters and initialize the candidate ANN with the predicted parameters. The ANN then carries out some specific task, such as recognizing an image. You calculate the loss made by the ANN and then — instead of updating the parameters of the ANN to make a better prediction — you update the parameters of the hypernetwork that made the prediction in the first place. This enables the hypernetwork to do better the next time around. Now, iterate over every image in some labeled training data set of images and every ANN in the random sample of architectures, reducing the loss at each step, until it can do no better. At some point, you end up with a trained hypernetwork.
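The crucial inversion in this loop—using the candidate's loss to update the hypernetwork rather than the candidate—can be shown with a deliberately tiny toy. Everything here is made up for illustration: the "architecture" is a single number a, the candidate "network" a single weight w, and the hypernetwork a single parameter theta that predicts w:

```python
import numpy as np

# Toy: the hypernetwork predicts w = theta * a for "architecture" a, whose
# ideal weight (by construction) is 2*a. The candidate's loss is used to
# update THETA, never w directly.
rng = np.random.default_rng(1)
x = rng.normal(size=200)

theta = 0.0                          # the hypernetwork's own parameter
lr = 0.05
for step in range(200):
    a = rng.choice([1.0, 2.0, 3.0])  # sample a random "architecture"
    w = theta * a                    # hypernetwork predicts the candidate's weight
    y = 2.0 * a * x                  # labels for this architecture's task
    grad_w = np.mean(2 * (w * x - y) * x)  # candidate's loss gradient w.r.t. w
    theta -= lr * grad_w * a         # chain rule: dw/dtheta = a; update theta
# theta converges toward 2.0, so predictions w = theta*a match the ideal 2*a
```

In the real system the same chain rule runs through two full neural networks and a graph of nodes, but the direction of the update is identical.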

Knyazev’s team took these ideas and wrote their own software from scratch, since Ren’s team didn’t publicize their source code. Then Knyazev and colleagues improved upon it. For starters, they identified 15 types of nodes that can be mixed and matched to construct almost any modern deep neural network. They also made several advances to improve the prediction accuracy.

Most significantly, to ensure that GHN-2 learns to predict parameters for a wide range of target neural network architectures, Knyazev and colleagues created a unique data set of 1 million possible architectures. “To train our model, we created random architectures [that are] as diverse as possible,” said Knyazev.

As a result, GHN-2’s predictive prowess is more likely to generalize well to unseen target architectures. “They can, for example, account for all the typical state-of-the-art architectures that people use,” said Thomas Kipf, a research scientist at Google Research’s Brain Team in Amsterdam. “That is one big contribution.”

Impressive Results

The real test, of course, was in putting GHN-2 to work. Once Knyazev and his team trained it to predict parameters for a given task, such as classifying images in a particular data set, they tested its ability to predict parameters for any random candidate architecture. This new candidate could have similar properties to the million architectures in the training data set, or it could be different — somewhat of an outlier. In the former case, the target architecture is said to be in distribution; in the latter, it’s out of distribution. Deep neural networks often fail when making predictions for the latter, so testing GHN-2 on such data was important.

Armed with a fully trained GHN-2, the team predicted parameters for 500 previously unseen random target network architectures. Then these 500 networks, their parameters set to the predicted values, were pitted against the same networks trained using stochastic gradient descent. The new hypernetwork often held its own against thousands of iterations of SGD, and at times did even better, though some results were more mixed.

For a data set of images known as CIFAR-10, GHN-2’s average accuracy on in-distribution architectures was 66.9%, which approached the 69.2% average accuracy achieved by networks trained using 2,500 iterations of SGD. For out-of-distribution architectures, GHN-2 did surprisingly well, achieving about 60% accuracy. In particular, it achieved a respectable 58.6% accuracy for a specific well-known deep neural network architecture called ResNet-50. “Generalization to ResNet-50 is surprisingly good, given that ResNet-50 is about 20 times larger than our average training architecture,” said Knyazev, speaking at NeurIPS 2021, the field’s flagship meeting.

GHN-2 didn’t fare quite as well with ImageNet, a considerably larger data set: On average, it was only about 27.2% accurate. Still, this compares favorably with the average accuracy of 25.6% for the same networks trained using 5,000 steps of SGD. (Of course, if you continue using SGD, you can eventually — at considerable cost — end up with 95% accuracy.) Most crucially, GHN-2 made its ImageNet predictions in less than a second, whereas using SGD to obtain the same performance as the predicted parameters took, on average, 10,000 times longer on their graphical processing unit (the current workhorse of deep neural network training).

“The results are definitely super impressive,” Veličković said. “They basically cut down the energy costs significantly.”

And when GHN-2 picks the best neural network for a task from a sampling of architectures, but even that best option isn’t good enough, the winner is at least partially trained and can be optimized further. Instead of unleashing SGD on a network initialized with random values for its parameters, one can use GHN-2’s predictions as the starting point. “Essentially we imitate pre-training,” said Knyazev.
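The warm-start idea can be illustrated with a toy sketch. This is not the real GHN-2 pipeline: the model below is a tiny linear layer, and `w_warm` is just a hand-made stand-in for a hypernetwork's predicted parameters. The point is only that, under the same optimization budget, starting from a good prediction ends closer to the optimum than starting far away.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny linear "network": y = X @ w, fit with squared error.
X = rng.normal(size=(64, 8))    # inputs
w_true = rng.normal(size=8)     # ground-truth weights
y = X @ w_true                  # targets

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

def train(w, steps=50, lr=0.05):
    """Plain full-batch gradient descent on squared error."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

w_cold = w_true + 3.0   # cold start: far from a good solution
w_warm = w_true + 0.1   # stand-in for a hypernetwork's predicted parameters

final_cold = loss(train(w_cold))   # same budget from a poor initialization
final_warm = loss(train(w_warm))   # same budget from the "predicted" one
```

With an identical training budget, `final_warm` comes out far below `final_cold`: this is the sense in which a predicted initialization "imitates pre-training."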

Beyond GHN-2

Despite these successes, Knyazev thinks the machine learning community will at first resist using graph hypernetworks. He likens it to the resistance faced by deep neural networks before 2012. Back then, machine learning practitioners preferred hand-designed algorithms over the mysterious deep nets. But that changed when massive deep nets trained on huge amounts of data began outperforming traditional algorithms. “This can go the same way,” he said.

In the meantime, Knyazev sees lots of opportunities for improvement. For instance, GHN-2 can only be trained to predict parameters for a single task, such as classifying CIFAR-10 images or ImageNet images, but not both at once. In the future, he imagines training graph hypernetworks on a greater diversity of architectures and on different types of tasks (image recognition, speech recognition and natural language processing, for instance). Then the prediction can be conditioned on both the target architecture and the specific task at hand.

And if these hypernetworks do take off, the design and development of novel deep neural networks will no longer be restricted to companies with deep pockets and access to big data. Anyone could get in on the act. Knyazev is well aware of this potential to “democratize deep learning,” calling it a long-term vision.

However, Veličković highlights a potentially big problem if hypernetworks like GHN-2 ever do become the standard method for optimizing neural networks. With graph hypernetworks, he said, “you have a neural network — essentially a black box — predicting the parameters of another neural network. So when it makes a mistake, you have no way of explaining [it].”

Of course, this is already largely the case for neural networks. “I wouldn’t call it a weakness,” said Veličković. “I would call it a warning sign.”

Kipf, however, sees a silver lining. “Something [else] got me most excited about it,” he said. GHN-2 showcases the ability of graph neural networks to find patterns in complicated data.

Normally, deep neural networks find patterns in images or text or audio signals, which are fairly structured types of information. GHN-2 finds patterns in the graphs of completely random neural network architectures. “That’s very complicated data.”
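For a concrete (and heavily simplified) picture of what a “graph of an architecture” means here, a small feed-forward network might be encoded as a list of operation nodes and directed edges. The node fields and edge list below are hypothetical stand-ins for illustration, not GHN-2’s actual representation:

```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str        # operation type, e.g. "conv3x3", "relu", "linear"
    out_dim: int   # number of output channels or units

# One toy architecture: input -> conv -> relu -> conv -> relu -> linear
nodes = [
    Node("input", 3),
    Node("conv3x3", 16),
    Node("relu", 16),
    Node("conv3x3", 32),
    Node("relu", 32),
    Node("linear", 10),
]

# Directed edges (i, j): the output of node i feeds node j. A graph neural
# network passes messages along these edges, which is why one model can
# consume architectures of arbitrary shape and size.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
```

Because the graph is just nodes plus edges, a completely different architecture (deeper, wider, with skip connections) is simply a different list of the same kind, which is what lets a hypernetwork treat random architectures as data.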

And yet, GHN-2 can generalize — meaning it can make reasonable predictions of parameters for unseen and even out-of-distribution network architectures. “This work shows us a lot of patterns are somehow similar in different architectures, and a model can learn how to transfer knowledge from one architecture to a different one,” said Kipf. “That’s something that could inspire some new theory for neural networks.”

If that’s the case, it could lead to a new, greater understanding of those black boxes.

The Case for Backing Up Source Code

As enterprise data security concerns grow, security experts urge businesses to back up their GitLab, GitHub, and BitBucket repositories.
