Listen: Joshua Brody on Computation is a Liberal Art
In this talk, Assistant Professor of Computer Science Joshua Brody discusses his research in the theory of computation. Brody makes the argument that computational thinking and theory of computation belong in a liberal arts curriculum.
“In our research area, theory of computation, we want to classify problems by their inherent difficulty. So for a given problem, how many resources are necessary and sufficient," he says. "You give me any computational problem, I want to know the minimal amount of resources that are required to solve the problem, and I want to understand why you can't do better than that.”
Increasing computational power has enabled great advances in science, from mapping the human genome to enabling secure online commerce. At the same time, computational insights have shed new light on subjects including economics, biology, and chemistry.
Brody’s main research area is theoretical computer science, and he is particularly interested in communication complexity.
Rich: Hey everyone. So, I'm Rich Wicentowski. I'm in the computer science department here. It's my pleasure today to introduce Joshua Brody. Joshua got his Ph.D. in 2010 from Dartmouth College and did two postdocs: one at Tsinghua University in Beijing, and then one in Denmark. He joined us in 2013. Joshua teaches upper-level courses in algorithms and probabilistic methods, and teaches many of the courses in the introductory sequence. He also teaches many directed readings in his research area on top of his normal teaching load.
Rich: Joshua has taken a leadership position in the college and also in the computer science department. Most recently, he's led a team of students in the International Collegiate Programming Contest. Just this past weekend, they made it to the world finals, which will be held at the end of March in Portugal. So, hopefully Joshua gets to go to Portugal.
Rich: Today, Joshua is going to be talking to you about his research, which is in the area of communication complexity, lower bounds, and other related areas of computer science. Joshua has several papers that he's co-authored with students, and today he's going to tell you why computation is a liberal art. So, please join me in welcoming Joshua.
Joshua: Thank you. Thank you, Rich, for the introduction. So yeah, I'm going to talk about my general research area, theoretical computer science, and hopefully place it in good context in a liberal arts curriculum. Before I get started with the talk, I want to give some acknowledgements. Most especially, I want to thank Eugene Lang for the faculty fellowship which provided my second-semester support. I got a lot of nice things done over my sabbatical last year. I published two different papers. My collaboration with Eric Blais at the University of Waterloo produced a manuscript, which will be published next year, that I'm super excited about. And, I guess more excitingly, there was a good amount of time to flesh out and refill my research pipeline. A lot of small projects had gone dormant, and I finally had enough time to think about them some more and prepare them, hopefully, for student research projects.
Joshua: When I've gone to faculty lectures given by people who don't have kids, in their acknowledgements they talk about how they got to go on vacations, and they got a dog, and they did very nice things. I managed to [inaudible 00:02:43] our second child. That's what we did personally. I wouldn't even say he woke up every 20 or 30 minutes in the fall — he just screamed and thrashed about every 20 or 30 minutes, September through November. I can't imagine what that would've been like if I was actually teaching at the time.
Joshua: So, I just want to go over quickly, at a high level, what my research program is. My Ph.D. thesis was on communication complexity, which I will tell you about during this talk. More recently, I've been very interested in applications of communication complexity to other areas — there are a lot of connections between communication complexity and different areas of computer science. So, even when I haven't been doing communication complexity itself, it's been a good foot in the door to get into other research areas, especially research with students. I'll try to talk about that more towards the end of the talk.
Joshua: So, this talk will be roughly broken into four pieces. I want to start by giving you motivation — especially if you are, say, a humanities professor, or you have never taken CS21 in your life — and hopefully convince you why studying computation is a worthy goal. I'll give you an overview of what communication complexity is, and a couple examples of how to use communication complexity to gain insight into other areas of computer science. And then, I'll quickly hit on a couple other research projects that I've done since arriving at Swarthmore.
Joshua: Okay. I want to start with this first slide, which I had up as you were all filing in. You must pick one. So, would you rather multiply two really big numbers, or would you rather add two really big numbers?
Joshua: If we did clicker questions, this would be an excellent clicker question for a faculty lecture. Yeah, so almost everyone says add, and we're not going to go into a formal proof why, but you probably have a good sense of intuition, even if you haven't taken CS21, for why you want to add. Multiplication is harder; it requires more work. This nugget, this intuition that multiplication is harder — at a high level, that's the thing I want to talk about this entire talk. If you want a quick sketch of exactly why multiplication is harder: multiplication requires more work. With addition, you're just doing a linear scan through the digits, adding one digit to the next. With the multiplication that you learned in elementary school, you're essentially multiplying every digit against every other digit. As a computer scientist: multiplication requires N squared operations, addition uses N operations, and N is less, so addition is easier.
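The digit-operation counts here can be sketched in a couple of lines of Python (my own illustration of the point above, not code from the talk):

```python
def add_ops(n_digits):
    # Grade-school addition: one scan through the digits,
    # one add (plus carry) per position -> N operations.
    return n_digits

def mult_ops(n_digits):
    # Grade-school multiplication: every digit of one number
    # multiplied against every digit of the other -> N^2 operations.
    return n_digits * n_digits

for n in (10, 100, 1000):
    print(n, add_ops(n), mult_ops(n))  # the gap widens as N grows
```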
Joshua: By the way, there will be some technical details in this talk. I'm kind of going to break convention with general-audience talks and go a little bit into low-level technical details. But I want to preface this by saying that you don't really need to understand almost any of the low-level technical details. What I'd like you to do is come away with a high-level impression. Maybe you'll get convinced that I understand it, and maybe you'll get convinced that there's a good reason to do this. A little bit of notation is unavoidable, so I've provided this convenient flip chart on the left side. Big O notation is an upper bound: the computation uses this much or less. So for solutions, we want the number inside the big O to be as small as possible.
Joshua: Okay, so normally we don't actually compare the number of operations across two different problems. In theoretical computer science, what we're much more likely to do is take one computational problem and compare different solutions for that problem. So, you can think about multiplication and ask yourself whether N squared is actually the best we can do, and the answer is no. This happened because of computer science. This N squared algorithm — I would have thought originally that it dated to [inaudible 00:06:57] times, but there were actually worse algorithms before it. It was invented by Brahmagupta around 630 AD. The reason it took so long is that the Romans didn't have a zero, so they couldn't actually do this operation.
Joshua: Digital computers were invented shortly after World War II, and in the 50s they came into common use at companies. So, a lot of computer science problems got new, efficient solutions in the 50s and 60s. Karatsuba did multiplication in N to the 1.585 — it's an irrational exponent, but that's okay; it's less than two, so it's more efficient. So very quickly, once we had computers and motivation for better multiplication algorithms, we got the run time down. N log N: again, start with N, divide by two, keep doing that until you get down to one. How many times do you divide by two? That's log N — much smaller than N. Then N log N log log N, so the log of the log...
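As a rough sketch of the trick behind Karatsuba's N-to-the-1.585 bound (a minimal Python version I'm adding for illustration; the exponent is log base two of three): split each number in half, and replace the four half-size multiplications of the naive split with three.

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using three recursive half-size
    multiplications instead of four, giving O(n^1.585) digit operations."""
    if x < 10 or y < 10:
        return x * y  # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)  # x = a*p + b
    c, d = divmod(y, p)  # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc, so one multiplication is saved.
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + mid * p + bd
```

The recursion makes three calls on half-length numbers, which is where the exponent log2(3) ≈ 1.585 comes from.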
Joshua: Okay, so this big O notation is what all computer scientists use. It's a good way to capture things, because we care about how the amount of computation scales with the input size. The downside is that it ignores constant factors. Maybe one of the reasons these multiplication algorithms weren't invented until computers were is that, for all practical purposes — for small numbers, small lengths — the new algorithms aren't actually more efficient in the real world. They only win for larger and larger numbers. But as is very common in computer science, we had the tools; we were just waiting for applications.
Joshua: In the early 80s, this actually happened. The world of cryptography was revolutionized in a period of maybe three years in the early 80s. At a very high level, modern cryptography is based on using math on really big numbers to encrypt efficiently, while at the same time, anyone trying to take your message and decrypt it without permission has to solve a computationally harder problem — or at least a problem that we expect to be hard. The inverse operation of multiplication is called factoring, and we don't know how to do it efficiently right now. This application to cryptography is still very much within the realm of computer science, but I think it's a really nice theory-of-computation insight into a different area.
Joshua: This whole idea behind modern cryptography — and oh, by the way, if you've ever used Amazon to purchase anything online, you're using modern cryptography — is based on this notion that some things are easy to compute, and others are hard. Fast multiplication is probably a four- or five-thousand-year-old open problem, and it's still being studied. The state of the art was done in 2007. Don't ask what log star is — it's a growing function of N, but it grows really, really slowly. It's less than five even when N is the number of particles in the universe. Fürer is actually local; he's a professor at Penn State. So, I just think it's really cool that you could come up with new insights and a better algorithm for something that people have been studying for five thousand years.
Joshua: So, what I want to do next, before I go into the details of my research area, is talk about a couple other areas, outside of computer science, where computational insights have played a role. A lot of biological and physical processes can be modeled in terms of computation right now, but I'm actually going to give one example from economics and one example from psychology instead. The first example concerns a computer scientist named Herbert Simon. He won the Turing Award in 1975, which is like the Nobel Prize for computer science. He also studied psychology and economics, and he's the only person who has both a Turing Award and a Nobel Prize. The economics research he won the Nobel Prize for was partially based on this idea of bounded rationality.
Joshua: Before his research, apparently in economics, the model was generally the humans and human organizations. Everyone just assumed that they did the most optimal thing possible his insight was, this doesn't actually always happen. Sometimes, people just do something that's just good enough. This is economics research, but this is absolutely a computational insight. Why do people not do something that's totally optimal, it's because figuring out the optimal solution is very hard and requires a lot of work. All of this intuition that you had that addition is bigger than multiplication, this is exactly the same intuition here. Coming up with the optimal solution is just hard. We know a lot of optimization problems in computer science, but we don't know actually any efficient way of doing it. So computer science wise, this is kind of clear, but it was Curby Simon who connected it to behavioral analysis I guess.
Joshua: The example from psychology I want to bring a little bit closer to home. Barry Schwartz was a former professor of psychology here. He did this research on the paradox of choice — if you're a colleague of his, please tell me if I say something totally wrong here. The idea is that having more choices can decrease happiness. He did some experiments where you had a big range of pens and got to choose the one you liked best and take it home. It turns out, people were less happy with the one they took if they were offered more pens. To me, this insight is computational in a very similar way: it takes a little bit of work to determine which one you like best, and if you've got more choices, then that's more work.
Joshua: From a computer science perspective, I would have been very interested in understanding exactly how this scales as a function of N. You've got different communication — sorry, different computational — processes that we understand. So, to figure out which one is best, are you actually comparing all pairs? We know that costs N squared time. Maybe you're using a good sorting algorithm, N log N. Or maybe you just do one scan over them and figure out which one is best, which is N. Computer scientists know exactly the computational complexity of these three approaches.
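Those three costs can be made concrete (a sketch with made-up pen scores; the function names and data here are my own, not from the talk):

```python
# Three ways to find the pen you like best, with different costs.
scores = [0.3, 0.9, 0.1, 0.7, 0.5]  # hypothetical "how much I like it" values

def best_all_pairs(xs):
    # Compare every pen against every other pen: ~N^2 comparisons.
    for x in xs:
        if all(x >= y for y in xs):
            return x

def best_by_sorting(xs):
    # Sort first, take the last: O(N log N) comparisons.
    return sorted(xs)[-1]

def best_by_scan(xs):
    # One pass, remembering the best so far: N - 1 comparisons.
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best
```

All three return the same answer; they differ only in how the work scales with the number of choices.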
Joshua: Okay, so I want to go back into my actual research right now. I'm assuming that you haven't studied communication complexity before — unless you're Lila — but I'm going to give you a high-level overview of what it is, a couple of examples, and then we'll talk about applications of it, too. So, just in general, very broadly: in our research area, theory of computation, we want to classify problems by their inherent difficulty. For a given problem, how many resources are necessary and sufficient? I view this as one research problem, and it's the one that fascinates me the most. You give me any computational problem, I want to know the minimal amount of resources that are required to solve the problem, and I want to understand why you can't do better than that.
Joshua: So these two questions — what resources are sufficient, and what resources are necessary — end up being different subfields within theoretical CS. Algorithm design is the sufficient-resources side: if you want to show that only this many resources are sufficient, you just design an algorithm for the problem, analyze it, pat yourself on the back, and move on with your day. The flip side, how many resources are necessary, is traditionally considered much harder, because you can't just come up with one solution — you need to show that no matter what you do, no matter how you try to solve the problem, you need at least this many resources. So, I view these as two sides of the same coin. The rest of the talk will be about the second one, though, and there's a very good reason for that: it's where my competitive advantage is. If you're a student here who eventually does research, you will start to have lots of interests, and you may or may not make progress in every single area. It just so happens that lower bounds are where I get results.
Joshua: Okay, so the research I've done is generally lower bounds. Ideally, I'd like to get generic tools that capture lower bounds in lots of different areas. Communication complexity lets me do that, and we'll discuss how soon. The lower bounds I care about most right now are ones that are informative about real-world computational processes, especially ones where we've got a massive amount of data — more than we're able to handle — and we need to deal with that. So ideally, I'd want to work with other researchers who care about lower bounds and show them how to prove lower bounds for their problems.
Joshua: Here's the general communication complexity model. Again, don't focus on the technical details; stay at the high level. It's a garden-variety setup: we've got two different people, they each have inputs, and they want to compute some function of both of their inputs. Their goal is to do it in the most communication-efficient way possible, and our goal, in studying communication complexity, is to understand exactly how much communication is required to solve these problems. So, I'll give you a couple examples. The first one is equality. The two players are Alice and Bob — it's almost always Alice and Bob; that's just our convention. So, we have Alice and Bob here. They each have n-bit strings. Again, you can think of them as each having a number, and they want to see if the numbers are equal.
Joshua: Computer scientists like bits, so by default the inputs are bit strings, and we want to do this using the minimal amount of communication. As a baseline, one easy thing they can do is have Alice send her entire input to Bob. Part of the reason they need communication at all is that each of them has only half of the input; without communication, they don't know enough to compute the function. But they could do it this way: Alice sends her entire input to Bob, Bob now has both X and Y, looks at them, sees whether they're equal, and then sends one bit back saying whether or not they're equal.
Joshua: It turns out, this is the best you can do: any communication protocol which computes equality has to send n plus one bits. I'm not going to go into the details of that proof, but you can ask me offline. The catch is that this holds as long as the players are deterministic — as long as they have no source of randomness. If they're allowed to choose the messages they send based on some probability distribution, some source of randomness, then they can actually do this with one percent error using eight bits. This is one of the few communication problems that actually gets used all the time. One garden-variety use: maybe you have a Netflix account and you want to download a movie — typically a big, huge file. You would like to know if this big, huge file got corrupted as it was being downloaded. So typically, what they might do is some sort of short equality test at the end, to be confident that what you got is actually what it's supposed to be.
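One standard way randomness buys this kind of saving is a shared-randomness parity-fingerprint protocol. This Python sketch is my illustration of that idea, not necessarily the exact protocol Brody has in mind: each round, Alice sends a single parity bit; if the strings differ, a random mask catches the difference with probability one half, so seven rounds push the error below one percent.

```python
import random

def parity_fingerprint_equal(x, y, trials=7):
    """Shared-randomness equality sketch (illustrative; see above).
    x and y are equal-length lists of bits held by Alice and Bob."""
    n = len(x)
    assert len(y) == n
    for _ in range(trials):
        # Both players see the same random mask (public coins).
        mask = [random.randint(0, 1) for _ in range(n)]
        alice_bit = sum(m & xb for m, xb in zip(mask, x)) % 2  # the 1 bit Alice sends
        bob_bit = sum(m & yb for m, yb in zip(mask, y)) % 2    # Bob's own parity
        if alice_bit != bob_bit:
            return False  # definitely unequal
    return True  # equal, except with probability at most 2**-trials
```

Seven parity bits plus one answer bit matches the "eight bits for one percent error" figure, since 2 to the minus 7 is under one percent.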
Joshua: Another example, disjointness. I'm going to switch things up right now — I was hoping both of you would be here, but that's okay. So, Provost Sarah Willie-LeBreton recently became provost, and all of a sudden, she's very busy. Rich wants to set up a meeting with her, and they can only set up a meeting when they both have free time. The provost is very busy, and she knows that Rich has a reputation for talking a lot. So, they don't want to just exchange their entire calendars of open slots for the semester. Ideally, they would do this in a way that's very communication-efficient, so they're not spending their entire semester talking just to find an empty slot.
Joshua: So, they just want to send some communication. One trivial thing they can do is this: Rich can talk a lot and send his entire free schedule; the provost can compare the schedules, find a common free slot, and schedule the meeting. It turns out this is essentially the best thing they can do — they can't do any better even if they have randomization. This particular problem, disjointness, is known as the queen of communication complexity. It might seem super easy right now, but the last paper on it, from 2004, finally gave an intuitive understanding of why you need as much communication as you have slots.
Joshua: It turns out, you do need to say a little bit about Wednesday 9-10, and a little bit about Wednesday 10-11, and so on for every single possible free slot. Rich and the provost potentially need to exchange some information about each particular slot just to determine whether they're both free then. It took 25 years of dense research before we got a complete understanding of this problem. It's also called the queen of communication complexity because it's used all over the place.
Joshua: Okay. I want to go into how these communication lower bounds get used. Equality and disjointness might seem like toy examples, but they're not — they're extremely useful for applications. And by application, I mean using communication complexity to show that some problem in another area is also hard. Every single one of these applications is going to work in exactly the same way. If you've been spacing out a little bit because of some technical detail, now is the time to stop spacing out for just a second: this is maybe the most important slide of the talk.
Joshua: Every single one of these lower bounds, every single one of these hardness results, is going to happen in exactly the same way. Three steps: one, we prove a lower bound for a communication problem. Two, we show how a hypothetical, maybe unreasonably efficient, solution to your favorite problem could be used to construct a protocol for the communication problem. Three, we take these two steps together and conclude that you can't have an unreasonably efficient solution to your problem. Again, because it's super important, I'll spell it out more directly.
Joshua: Step one, you're proving a lower bound on your communication problem. Step two, you're building a solution to the communication problem using some solution to your particular problem — sorry, tripping over my words. Step three just smushes these inequalities together. This is how we're going to prove lower bounds every single time. A big, huge win here, even if you don't care about communication complexity, is that communication complexity now lives in your field: we know a lot of existing communication problems, and we have a deep understanding of their communication complexity.
Joshua: Quite often, we get step one for free: we just take an existing communication lower bound off the shelf and plug it in. Historically, step two has been relatively easy to do, while step one can be quite difficult. So, you get a big win by recycling existing communication problems. Okay, I'll show you a couple examples. All of these, for better or for worse, are within the larger computer science and engineering domain, but I'll start with one that's as far from my research area as possible. The original reason for studying communication complexity was actually integrated circuit design. The idea here is that someone in an engineering department is designing one of these integrated circuits — a chip that computes one particular function. The things they care about when building these chips are area and time. We want these chips to be as fast as possible, because the customers want speed.
Joshua: We also want these chips to be as small as possible, because it costs the factory money every single time they print one of these chips. So the smaller it is, the cheaper it is to produce, and the less time it takes to compute the function, the happier the customers are. It turns out that for a lot of functions you might want to design a chip for, the right trade-off ends up being the area times the time squared. And here's how you build a communication protocol out of one of these integrated circuit designs: you imagine that you have a chip for the function, and essentially, you give half of the chip to Alice and half of the chip to Bob.
Joshua: So, you imagine that this chip exists, you use it to solve whatever communication problem Alice and Bob want to solve, and they emulate the run of the chip computing the function to solve their actual problem. There are a lot of different functions where we have this same kind of lower bound on area times time squared. From the point of view of a manufacturer making these chips, we now know the right trade-off. What this means is, if I want to make the chip a little bit faster, I have to pay in terms of area. Conversely, if I want to make the chips cheaper by decreasing the area, then my time blows up. How much does it blow up? Well, if I decrease the area, the time only grows like a square root — not so bad. But if I try to shrink the time, then I have to pay a quadratic penalty in area.
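The trade-off arithmetic is easy to play with. Assuming, purely for illustration, a fixed constant C with area times time squared at least C (real chips have a function-specific constant):

```python
import math

C = 1_000_000  # assumed lower bound on area * time**2, for illustration only

def fastest_time(area):
    # If area * time**2 >= C, the best achievable time is sqrt(C / area).
    return math.sqrt(C / area)

for area in (100, 400, 1600):
    print(area, fastest_time(area))
# Quadrupling the area only halves the time; conversely,
# halving the time requires quadrupling the area.
```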
Joshua: So, these chip manufacturers cared about these two resources, and they transfer in the same way into communication. A couple other areas where we know communication lower bounds: data structures. I see many of you have taken CS35 — congratulations, that's the right choice. A general data structure problem: we have a bunch of data we want to store, maybe without using too much space, and we want to query the data and answer questions about what's being stored. The idea is that if we're allowed more space, we might be able to cut down on the query time; and if we can afford to spend a little extra work every single time we do a query, maybe we can store the data more succinctly, in limited space.
Joshua: So, this again is also communication complexity. We're going to anthropomorphize it in exactly the same way as the VLSI circuits: essentially, the communication problem models a conversation between the CPU and the memory. This captures the performance of a lot of data structure problems. The third application is related to ITS. Streaming algorithms: the whole idea behind streaming algorithms is that you've got way more data than can fit on your machine at one time. So, imagine Joel is trying to monitor all of the internet traffic coming into Swarthmore. We would like to run statistics on that traffic to determine if our network is under attack. Based on various statistics, we can not only tell if someone's trying to hack the Swarthmore network — a lot of the time, we can tell exactly what attack they're trying to run.
Joshua: The problem is that it's too much data to fit on your machine. So, two garden-variety lower bounds: to have any hope of computing these statistics, you need both randomization and approximation. Without one of the two, you essentially need to save all of your internet traffic, which is hopeless. But when you have both, a lot of interesting things can happen. The same kind of technique works here: we chop the internet traffic in two. Alice emulates the streaming algorithm on the first half of the traffic, sends the contents of the algorithm's memory to Bob, and Bob finishes emulating the run.
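The hand-off works because a streaming algorithm's memory is tiny compared to the stream. As a concrete example, here is the Misra-Gries heavy-hitters sketch — an algorithm I'm supplying as an illustration; the talk doesn't name a specific one. Alice runs it on the first half of the traffic, "sends" only the small counter table to Bob, and Bob finishes the run:

```python
def misra_gries(state, stream, k=10):
    """Misra-Gries heavy-hitters sketch: keeps at most k - 1 counters,
    so the state stays tiny no matter how long the stream is."""
    for item in stream:
        if item in state:
            state[item] += 1
        elif len(state) < k - 1:
            state[item] = 1
        else:
            # Decrement all counters; drop any that hit zero.
            for key in list(state):
                state[key] -= 1
                if state[key] == 0:
                    del state[key]
    return state

traffic = ["dns", "http", "http", "ssh", "http", "dns", "http", "http"]
mid = len(traffic) // 2
alice_state = misra_gries({}, traffic[:mid])      # Alice's half
# "Sending" alice_state is the only communication in the emulation.
final = misra_gries(dict(alice_state), traffic[mid:])  # Bob finishes
```

The communication cost of the emulated protocol is exactly the size of the algorithm's memory, which is how streaming lower bounds fall out of communication lower bounds.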
Joshua: This idea — that randomization and approximation are both needed — might seem esoteric, of interest only to people who care about lower bounds. But it has huge practical importance to companies that run these kinds of algorithms. It's become a garden-variety rule of thumb: it's almost always the case that you need randomness, and it's almost always the case that you need approximation. So, companies working on streaming algorithms for their particular problems almost never even think about doing it without randomness and approximation. These lower bounds have actually changed how the algorithms get developed; they're very informative about what the right answers are. Okay.
Joshua: I've given three applications, and there are lots and lots more. I want to talk about just one more, because I co-invented the technique. This is related to property testing. Property testing algorithms are another way we try to handle massive amounts of data. In this kind of algorithm, we have way more data than we can even look at, so we want to compute some function of our data without even spending the time to do an entire scan. Sometimes, you might want to do a quick check, hopefully see that your input has a particular property, and move on with your day. In these examples, the input is some function, and you want to see if this function has a property or not, without having to compute every single value of the function.
Joshua: So, Charlie's going to make some computations here. In general, this is a pretty hopeless task: to figure out whether the function has a property or not, you almost always need to compute everything. So, the subfield of property testing relaxes the task. A lot of the time, we don't need to tell exactly whether F has the property or not; we just need to tell whether F has the property versus being far from having the property. For a lot of computer science applications, the input you get is noisy anyway, so it makes sense to just distinguish having the property from being far from having it. If you want to think about this at a very high level: we've given ourselves some wiggle room, and hopefully this wiggle room is enough that we can do a quick check super efficiently.
Joshua: So, I want to give one specific example of a function property that you might care about. This property is called k-parity: F is k-parity if it's the XOR of k of the bits. The XOR function, if you haven't seen it before: I'm going to take k of the n input bits and just count the number of ones. If it's odd, the output is one; otherwise, the output is zero. This had been studied in the sub-linear-time algorithms community before: the old lower bound was the square root of k. The new lower bound is going to be k.
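Written out in code (a small sketch with made-up index choices):

```python
def make_k_parity(indices):
    """f(x) = XOR of the bits of x at the given k positions:
    1 if an odd number of those bits are one, else 0."""
    def f(x):
        return sum(x[i] for i in indices) % 2
    return f

f = make_k_parity([2, 5, 9])           # a 3-parity function
x = [0, 1, 1, 0, 0, 1, 0, 0, 0, 1]     # bits 2, 5, and 9 are all one
print(f(x))  # an odd number of ones among them, so the output is 1
```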
Joshua: I see at this point that there are a significant number of blank stares. Congratulations — you are at the right point in this talk. Should you really care about this particular property, or about sub-linear-time algorithms? Maybe not. But I will tell you that property testing is a very active subfield of sub-linear-time algorithms, and the people in this research community care about lower bounds a lot. A lot of their research — maybe even half of it — is proving lower bounds on how much computation is necessary. So, even if you don't care about it, the people who actually do this work do. And it turns out that before our work, in 2012 or so, they were using essentially a very old technology for proving their lower bounds. We're going to do it using the newer technology. Do you mind getting my phone and ...
Speaker 4: Do what?
Joshua: Nevermind. My phone's ringing. I'm just going to let it ring. It's okay.
Joshua: So, we're going to reduce from a special flavor of the disjointness problem that I already talked about. The inputs are subsets, and in this particular case, we're going to promise that the subsets are really small — you should think of k over two as small compared to n. It turns out that if we restrict our inputs to small sets, we can save on communication, but we still get a lower bound of Omega of k. So, I'm going to build a communication protocol for k-disjointness out of this property testing algorithm, and from my perspective, I'm going to do it in exactly the same way we did with the VLSI circuits and the data structures, etc.
Joshua: So, here's the game plan. Alice and Bob have two different sets. The first thing they do is take their sets and create functions. So, Alice has three bits here: her function F on X is going to take bits two, five, and nine, and XOR them together. Bob does the same thing with his set to get G, and then they create one function, H, which is just the XOR of these two functions. So, this function H is an XOR of a bunch of Alice's bits and a bunch of Bob's bits. Now we're going to run a property testing algorithm for k-parity on H. Part of the problem is that Alice and Bob don't actually know what H is: Alice sees F, Bob sees G, and no one sees both. So, they're going to have to communicate every time they want to compute H of X for some input X.
Joshua: So, they're going to run this property tester, and every time they need to compute H of X, H of Y, H of Z, they're going to exchange F of X and G of X. So, they're going to go back and forth, emulating the property tester, and eventually, whatever the property tester outputs, they're going to output. It just so happens that Alice and Bob each have K over two indices. If these indices do not overlap, then when you XOR them together, it's an XOR of K different indices: K over two here, K over two here. But, if they intersect, they both have the same bit, and these bits cancel each other out. So if they intersect, then it's actually not a K-parity function. Trust me that it's actually quite far from K-parity.
Joshua: So, for the cost of this protocol, they need to exchange two bits every time they want to compute H of X. So the number of queries times two is the cost of this communication protocol. We know this communication protocol must use at least Omega(K) resources. So, if you divide each side by two, now we can say that roughly K queries are necessary to test K-parity. I am sweeping a lot of hard details under the rug; proving this lower bound for K-disjointness is super hard. But none of us needed to do it, we just took this lower bound off the shelf.
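The two-bits-per-query accounting can be made concrete with a toy emulation. This is a hedged sketch, not the paper's protocol: the function names are invented, and the "tester" is abstracted into a list of query points so the cost bookkeeping is visible.

```python
# Hypothetical sketch of the emulation: every time the property tester
# queries h(x), Alice sends f(x) (1 bit) and Bob sends g(x) (1 bit),
# so both players learn h(x) = f(x) XOR g(x) at a cost of 2 bits.

def emulate_tester(f, g, queries):
    """Emulate a tester that queries h = f XOR g at the given points.

    Returns (answers, bits_communicated)."""
    bits = 0
    answers = []
    for x in queries:
        fx = f(x)          # Alice's message: 1 bit
        gx = g(x)          # Bob's message: 1 bit
        bits += 2
        answers.append(fx ^ gx)  # both players now know h(x)
    return answers, bits
```

Since the protocol's total cost is 2 times the number of queries, and the communication problem needs Omega(K) bits, the tester must make roughly K/2 queries, which is the lower bound argument in the talk.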
Joshua: So, once we had this particular function, my co-author Eric Blais was a property testing guy, so he knew a lot of property testing problems. Don't focus on the technical details of this slide. This is basically all the functions that we could think of. We applied the same technique, and we got a lot of new lower bounds. So, everything in red here is bigger than what was known before. Stuff in black is not actually bigger, but especially in a liberal arts context, I think there's a huge amount of value because our proofs are much simpler than what was known before. So in particular, these two bounds for K-parity went from square root of K up to K; that's pretty good.
Joshua: If you look at these proofs, that's basically five pages of dense math, and I'm as big a fan of math as the next guy, but with these kinds of proofs, you go from one line to the next, and you can verify that the next line is true based on the previous line. So, you go through this entire five pages and at the end, you're like, yes, this result is true, but we don't have a deep understanding of why it's true. So, not only are communication lower bounds good for getting new lower bounds; if you buy into the fact that we have a deep understanding of these communication lower bounds, then they provide a lot of insight, not just into the fact that a lot of resources are needed, but into why.
Joshua: Okay. So, I'm going to take a quick drink of water, and then talk about some more recent research projects. I think communication complexity is awesome. It's a very deep, beautiful mathematical field. If you don't care about that, here's what you care about. This is kind of a virtuous feedback loop. The more communication complexity problems we know, and the deeper our understanding is, the easier it is to get applications in other areas. The more applications we have for lower bounds, the better motivation we have for proving communication lower bounds. The content in this slide freaked me out when I was a beginning grad student. There are a lot of details and a lot of nuance.
Joshua: By the end of the talk, I'm going to talk about why this is actually a good place for undergraduate research, and why it's not something that people should freak out about. Okay. All these tools on the upper left, how we actually prove communication lower bounds, that is beyond the scope of this talk, but it's essentially a lot of math. So, in a lot of different areas of math that people know well, they've been able to use that math to prove communication lower bounds. So, because a very small number of these communication problems pop up all over the place, I think there's a lot of value in understanding these few communication problems as deeply as we can. This work that I did with some co-authors could essentially be described as more than you ever, ever wanted to know about the equality problem. It's an extremely fine-grained understanding: if you care about the amount of interaction between Alice and Bob, false positive rates, false negative rates, we have tight lower bounds in terms of five or six different parameters.
Joshua: This intersection problem is very related to set disjointness, but rather than just saying whether or not the sets are disjoint, we want to actually output the intersection. It turns out that for the cryptography community, this is the right problem to work on. This [inaudible 00:40:19] problem was actually a direct outcome of my research in this property testing world. So, it turns out these lower bounds were off by a little bit. This K lower bound came out last month, but it turns out K-disjointness is not actually the right problem to look at. There's a related problem called Hamming distance.
Joshua: So, Eric and I, along with his grad student [inaudible 00:40:47], gave tight lower bounds for Hamming distance, and established this connection to property testing, but it's only in a special case where the Hamming distance is either one or three. We have good reasons for handling that special case; you might have to trust me. The paper that came out used totally different techniques, super interesting also. So the next thing I did in my research is called strong direct sum. In a direct sum kind of problem, maybe you want to compute a function not once, but many different times, and you want to know: if you want to compute it K times, is it going to result in a factor of K blow up in the communication? Or maybe you can amortize the cost over a lot of different runs.
Joshua: The classic example I was always taught was: you've got a big room full of potatoes that you have to peel. It's very dull work. Presumably none of us really knows how to peel potatoes really well. But, if you've got 1000 of them, you will get better at it over time. So, a direct sum result probably does not hold for the question of peeling potatoes. According to a Google search, this image is of a professor who's grading homework assignments. So, if you're a student and you always wanted to ask what professors are feeling when they're grading, especially if they have a large class of maybe 20 or 30 students ... sorry about that. If you want to know what we're feeling when we're grading lots of homework assignments over and over, this is exactly how we're feeling. No... this is a really [inaudible 00:42:29] for professors. Is grading 25 copies of the same exam going to cost you 25 times the time? Hopefully not. Hopefully you will find that students make the same mistakes over and over, and maybe you can kind of cut down the time.
Joshua: But in different computation models, sometimes the direct sum result holds, sometimes it doesn't. Our result actually shows for randomized query complexity that computing K copies of a function actually results in a bigger than K blow up, which I think is super weird and interesting. So, I'll give you a very quick sketch of what's going on. The intuition is, if you're computing F on K different inputs with overall error epsilon, one thing that you could do is compute each copy individually, but with low error, epsilon over K, so that by a union bound the overall error is at most epsilon. What we wanted to do was show that you had to compute each individual copy with low error.
Joshua: So, we're going to try to take a protocol, or an algorithm, for computing K copies, fix all the inputs, take the one input that we need to compute, and just jam it in somewhere among the K different copies. We'd like to show that we can do this in a way that proves that to compute this copy just once, you have to do it with super low error. This log K blow up comes from doing it with low error. We can't quite do that, but what we can show is an algorithm for F with low error that sometimes just cops out and aborts. The abort probability is super high, maybe one third. But then, we show functions where it's still hard even if you're allowed to abort some of the time.
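The connection between low error and a log K factor can be illustrated with the standard textbook amplification trick: repeat a constant-error randomized procedure and take a majority vote. This is a hedged sketch of that generic technique, not of the paper's actual proof, and all names here are invented for the illustration.

```python
# Standard error reduction by repetition (textbook technique): majority
# vote over t independent runs drives the error down exponentially in t,
# so reaching error eps/K costs t = O(log(K/eps)) repetitions per copy.
# That multiplicative O(log K) is one standard source of a log-K blow-up.

import random

def noisy_eval(truth, err):
    """A randomized 'algorithm' that returns the right bit,
    except with probability exactly `err`."""
    return truth if random.random() >= err else 1 - truth

def amplified(truth, err, t):
    """Majority vote over t independent runs of noisy_eval."""
    votes = sum(noisy_eval(truth, err) for _ in range(t))
    return 1 if 2 * votes > t else 0

# By a Chernoff bound, with per-run error 1/3 the majority of t runs is
# wrong with probability at most (2 * sqrt(err * (1 - err))) ** t,
# which shrinks exponentially in t.
```

The talk's point is subtler than this sketch: the result shows the blow-up is genuinely necessary for some functions, not merely that amplification is one way to achieve low error.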
Joshua: This is what I did on my sabbatical research. So, other research problems: this is kind of the non-communication complexity stuff that I've gotten into because they're related fields. I have a paper I'm very excited about with Mario Sanchez. This multiparty pointer jumping problem is a classic candidate hard problem. We don't know what the communication complexity is, but we think it's hard, and there are nice results for it. On the communication side, we found out that this super hard communication problem is just a little bit easier than it used to be. So, we slightly improved the upper bound, and in the technique we used under the hood, there are these random graphs, and there's a lot of stuff you can say by taking a random graph. We had a new twist on it by allowing [inaudible 00:45:18].
Joshua: The work I did with Joe [inaudible 00:45:19] became Joe's honors thesis. For a lot of data structures, you might want to do all the memory accesses in advance, ahead of time, especially when you've got a system that's multi-core, or where the memory's far away, because the rounds of interaction between a CPU and the memory can be expensive. We showed for this classic problem called predecessor search that there's a huge price to pay for doing it non-adaptively. Just because of time, I think I'm going to skip the last project.
Joshua: The overall summary of the general research program: what is the minimal amount of resources that are necessary and sufficient to solve a problem? I think these are laws of nature. They're foundational science. I hope I've given you at least a taste to show you that these things can be studied rigorously, and that they're worth studying rigorously. I'm a big believer in core communication problems. There's a very small number of them that are used for lower bounds all over the place, so I want to study them as much as possible. And, I want to say just a couple more things in the remaining time on student research. So, I have had a lot of success with publishing both with sophomore students and, when I was a post doc, with beginning grad students, and I think this picture of the landscape of communication complexity with all its nuances is actually a strength, not a weakness, for working with undergrads.
Joshua: I think one of the reasons why is, as long as I'm the person who has the big picture understanding of this, students really don't need to know every single nuance. So, I can give them one open problem on one particular wrinkle of these communication problems, one particular variant. And, for anything with a bounded time, like a summer research project, you don't need to have the global perspective of what's going on in the research space. It's totally fine if students believe the professor that this is an interesting problem and worthy of study. Students might not know ... you might conceptualize research as, I have to do earth-shaking, new breakthrough research. A lot of it is more low level, and there are a lot of problems where we know at a high level what's going on with a particular communication problem, but maybe for one particular flavor, we don't have a complete understanding.
Joshua: If the professor does a good job developing these problems, incremental progress is totally fine, and I think there's a huge amount of space for incremental progress here, which is accessible to undergrads, but at the same time, it is legit research that people outside of the liberal arts community will find valuable and useful to know. So, I do think this takes work for the professor. At a high level, one of my goals is to maintain expertise and see the big picture. I think it takes a little bit of effort and work to select really good research problems for undergraduates. But, there's an advantage in the difference between my knowledge and the student's knowledge. I need more knowledge to understand what the big picture is, and why these problems are interesting. But hopefully students can just take one particular, concrete research problem and know what they're doing for that particular problem, without knowing the entire space.
Joshua: So, I think that's a good place to stop. Thanks.