Showing posts with label computer assisted markets. Show all posts

Monday, May 29, 2023

Further progress on course allocation, by Budish, Gao, Othman, Rubinstein and Zhang

 Here are some new developments in the course allocation mechanism used initially at Wharton and now elsewhere.  It turns out that strategyproofness in the (very) large doesn't imply strategyproofness in samples of realistic size, but this seems to be fixable (and profitable manipulations were not easy to find). The paper concludes with some far-ranging thoughts on the econ-CS interface.

Practical algorithms and experimentally validated incentives for equilibrium-based fair division (A-CEEI), by Eric Budish, Ruiquan Gao, Abraham Othman, Aviad Rubinstein, and Qianfan Zhang

Abstract: "Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is an equilibrium-based solution concept for fair division of discrete items to agents with combinatorial demands. In theory, it is known that in asymptotically large markets:

•For incentives, the A-CEEI mechanism is Envy-Free-but-for-Tie-Breaking (EF-TB), which implies that it is Strategyproof-in-the-Large (SP-L).

•From a computational perspective, computing the equilibrium solution is unfortunately a computationally intractable problem (in the worst-case, assuming PPAD≠FP).

We develop a new heuristic algorithm that outperforms the previous state-of-the-art by multiple orders of magnitude. This new, faster algorithm lets us perform experiments on real-world inputs for the first time. We discover that with real-world preferences, even in a realistic implementation that satisfies the EF-TB and SP-L properties, agents may have surprisingly simple and plausible deviations from truthful reporting of preferences. To this end, we propose a novel strengthening of EF-TB, which dramatically reduces the potential for strategic deviations from truthful reporting in our experiments. A (variant of) our algorithm is now in production: on real course allocation problems it is much faster, has zero clearing error, and has stronger incentive properties than the prior state-of-the-art implementation."
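To make the solution concept a bit more concrete, here is a minimal sketch (my own toy illustration, not the authors' algorithm; the course names, prices, and budgets are all made up) of the demand side of A-CEEI: given item prices and nearly equal budgets, each student should receive approximately their favorite affordable bundle of courses.

```python
from itertools import combinations

def best_affordable(courses, prices, budget, utility):
    """Return the highest-utility bundle of courses whose total price fits the budget."""
    best, best_u = set(), float("-inf")
    for r in range(len(courses) + 1):
        for bundle in combinations(courses, r):
            if sum(prices[c] for c in bundle) <= budget:
                u = utility(bundle)
                if u > best_u:
                    best, best_u = set(bundle), u
    return best, best_u

# Toy instance: two students, three courses, near-equal budgets (the "EI" in A-CEEI).
courses = ["algo", "micro", "design"]
prices = {"algo": 60, "micro": 55, "design": 40}
budgets = {"alice": 100, "bob": 101}
utilities = {
    "alice": lambda b: 10 * ("algo" in b) + 6 * ("design" in b) + 3 * ("micro" in b),
    "bob":   lambda b: 9 * ("micro" in b) + 7 * ("design" in b) + 2 * ("algo" in b),
}

for name in ["alice", "bob"]:
    bundle, u = best_affordable(courses, prices, budgets[name], utilities[name])
    print(name, sorted(bundle), u)
```

Note that at these made-up prices both students demand "design"; if that course had a single seat, this overdemand would be exactly the kind of (bounded) clearing error that A-CEEI tolerates and that the paper's algorithm drives to zero.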

Here's an intriguing passage:

"In Section 6 we use our manipulation-finding algorithm in combination with our fast A-CEEI finding algorithm to explore the plausibility of effective manipulations for students bidding in ACEEI. Originally, we had expected that since our mechanism satisfies the EF-TB and SP-L properties, it would at least be practically strategyproof — if even we don’t really understand the way our algorithm chooses among the many possible equilibria, how can a student with limited information learn to strategically bid in such a complex environment? 

"Indeed, in 2 out of 3 schools that we tested, our manipulation-finding algorithm finds very few or no statistically significant manipulations at all. However, when analyzing the 3rd school, we stumbled upon a simple and effective manipulation for (the first iteration of) our mechanism. We emphasize that although the manipulation is simple in hindsight, in over a year of working on this project we failed to predict it by analyzing the algorithm — the manipulation was discovered by the algorithm.

"Inspired by this manipulation, we propose a natural strengthening of envy-free (discussed below), which we call contested-envy free. We encode the analogous contested EF-TB as a new constraint in our algorithm (specifically, the integer program for finding optimal budget perturbations). Fortunately, our algorithm is still very fast even with this more elaborate constraint. And, when we re-run our manipulation-finding experiments, we observe that contested EF-TB significantly reduces the potential for manipulations in practice."

...

"Conclusion:  In this work, we give a significantly faster algorithm for computing A-CEEI. Kamal Jain’s famous formulation “if your laptop cannot find it then neither can the market” [Papadimitriou 2007] was originally intended as a negative result, casting doubt on the practical implications of many famous economic concepts because of their worst-case computational complexity results. Even for course allocation, where a heuristic algorithm existed and worked in practice, Jain’s formulation seemed to still bind, as solving A-CEEI involved an intense day-long process with a fleet of high-powered cloud servers operating in parallel. The work detailed in this paper has significantly progressed what laptops can find: even the largest and most challenging real course allocation problems we have access to can now be solved in under an hour on a commodity laptop. 

"This significant practical improvement suggests that the relationship between prices and demand for the course allocation problem—and potentially other problems of economic interest with complex agent preferences and heterogeneous goods—may be much simpler than has been previously believed and may be far more tractable in practice than the worst-case theoretical bounds. Recalling Jain’s dictum, perhaps many more market equilibria can be found by laptops—or, perhaps, Walras’s original and seemingly naive description of how prices iterate in the real world may in fact typically produce approximate equilibria. 

"Our fast algorithm also opens the door for empirical research on A-CEEI, because we can now solve many instances and see how the solution changes for different inputs. We took it in one direction: empirically investigating the incentives properties of A-CEEI for the first time. For course allocation specifically, this faster algorithm opens up new avenues for improving student outcomes through experimentation. For instance, university administrators often want to subsidize some group of students (e.g., second-year MBA students over first-year MBA students), but are unsure how large of a budget subsidy to grant those students to balance equity against their expectations. Being able to run more assignments with different subsidies can help to resolve this issue."
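The "seemingly naive" Walrasian price iteration the conclusion alludes to can be sketched in a few lines. This is a generic tâtonnement on a toy one-good market with linear demand (my own illustration of the idea, not the paper's A-CEEI heuristic):

```python
def tatonnement(excess_demand, p0, step=0.05, tol=1e-6, max_iter=10_000):
    """Naive Walrasian price adjustment: raise the price where demand
    exceeds supply, lower it where supply exceeds demand."""
    p = p0
    for _ in range(max_iter):
        z = excess_demand(p)
        if abs(z) < tol:
            return p
        p = max(0.0, p + step * z)
    return p

# Toy single-good market: demand 100 - 2p, supply 40; equilibrium at p = 30.
p_star = tatonnement(lambda p: (100 - 2 * p) - 40, p0=1.0)
print(round(p_star, 3))  # ≈ 30.0
```

With a single good and downward-sloping demand this converges geometrically; the interesting question raised above is why something like it also seems to work on real combinatorial course-allocation instances.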

*************

Earlier related posts:

Thursday, April 23, 2009

Sunday, October 4, 2009

Thursday, May 30, 2013

Monday, August 3, 2015

Tuesday, June 9, 2020

Tuesday, April 6, 2021

Cloud Pricing: The Spot Market Strikes Back, by Dierks and Seuken in Management Science

 Sven Seuken writes:

"I just read your blog post about the new paper on economics of cloud computing, which is very interesting. Given that you highlighted the authors' thoughts on why auctions are not used in cloud computing markets, I thought you might be interested in a recent paper coming out of my group, which was just published at Management Science: https://pubsonline.informs.org/doi/10.1287/mnsc.2020.3907

"In our model, we assume that a cloud provider must *always* offer a standard, non-preemptible fixed-price market (because only this satisfies many customers' business needs, which is in line with the arguments that Hummel and Schwarz provide). But we show that a cloud provider can typically increase her profit and create a Pareto improvement for the users by *additionally* selling idle instances on a preemptible spot market (e.g., via an auction).

"Here's the paper and abstract:

Cloud Pricing: The Spot Market Strikes Back
by Ludwig Dierks and Sven Seuken
Published online: 25 Feb 2021, https://doi.org/10.1287/mnsc.2020.3907

Abstract: Cloud computing providers must constantly hold many idle compute instances available (e.g., for maintenance or for users with long-term contracts). A natural idea, which should intuitively increase the provider’s profit, is to sell these idle instances on a secondary market, for example, via a preemptible spot market. However, this ignores possible “market cannibalization” effects that may occur in equilibrium as well as the additional costs the provider experiences due to preemptions. To study the viability of offering a spot market, we model the provider’s profit optimization problem by combining queuing theory and game theory to analyze the equilibria of the resulting queuing system. Our main result is an easy-to-check condition under which a provider can simultaneously achieve a profit increase and create a Pareto improvement for the users by offering a spot market (using idle resources) alongside a fixed-price market. Finally, we illustrate our results numerically to demonstrate the effects that the provider’s costs and her strategy have on her profit.

Monday, April 5, 2021

Some economics of providing cloud computing, by Microsoft economists Hummel and Schwarz

 Here's a paper on an aspect of cloud computing by two Microsoft economists. (Microsoft's cloud service is called Microsoft Azure.)  In addition to the capacity question the paper models, it presents a brief, clear overview of the market for cloud computing.

Efficient Capacity Provisioning for Firms with Multiple Locations: The Case of the Public Cloud, by Patrick Hummel and Michael Schwarz, March 26, 2021

Abstract: This paper presents a model in which a firm with multiple locations strategically chooses capacity and prices in each location to maximize efficiency. We find that the firm provisions capacity in such a way that the probability an individual customer will be unable to purchase the goods the customer desires is lower in locations with greater expected demand. The firm also sets lower prices in larger locations. Finally, we illustrate that if a customer is indifferent between multiple locations, then it is more efficient to place this customer in a location with greater expected demand. These theoretical results are consistent with empirical evidence that we present from a major public cloud provider.


"2.1 Industry Overview

"The cloud computing industry is young, large, and rapidly growing. Although some of the concepts behind the public cloud were developed as early as the 1960s, all modern public clouds first emerged in the 21st century (Foote 2017). Today annual world cloud revenues exceed $250 billion and are expected to grow by another 20% in 2021 (Graham et al. 2020a).

"The public cloud consists of a wide range of services including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). SaaS involves providing applications such as web-based email and productivity software to a consumer that can be accessed via the Internet. PaaS provides a platform for deploying consumer created applications using the provider’s programming languages, libraries, and tools.

"And IaaS provisions fundamental computing resources such as processing, storage, and network to a consumer that can be used to deploy and run arbitrary software (Mell and Grance 2011)."

...
"2.5 Why Auctions are Not Used
...
"Since cloud providers provision enough capacity to almost always be able to meet demand, if a cloud provider used an auction to sell compute to customers, the final price at the auction would almost always be equal to the reserve price. However, since cloud customers typically have a value per unit of compute that is orders of magnitude higher than the corresponding capacity costs, in the rare event that there was not enough capacity to meet all demand, the final price in an auction would be dramatically higher than the cloud provider’s costs. Thus, if a cloud provider used an auction to sell compute to customers, there would be a very high probability that all customers could obtain all the compute they wanted at a low price and a low probability that the final price would be very high.

"There are two problems with this pricing that would make auctions unsuitable in practice. First, using an auction results in a very high amount of uncertainty about the final realized prices. Thus, if either the cloud provider or the cloud customers are at all risk averse, using an auction to set prices will not meet either the cloud provider’s or the cloud customers’ needs.

"Second, under an auction a cloud provider has a far stronger incentive to underinvest in capacity than under a fixed price mechanism. Under a fixed price mechanism, the cloud provider’s revenue can only go down as a result of underinvesting in capacity, as the cloud provider will not be able to service as much demand. But under an auction, underinvesting in capacity will significantly increase a cloud provider’s revenue by increasing the probability that there will not be enough capacity to meet demand, thereby increasing the probability that the final price in the auction will be very high. Thus, using a fixed price mechanism also enables a cloud provider to more credibly commit to provision the efficient amount of capacity. We illustrate these points formally in Appendix A in the paper."
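The bimodal price pattern Hummel and Schwarz describe is easy to see in a quick Monte Carlo. The numbers below (capacity, customer value, a Binomial demand process) are my own assumptions for illustration, not from the paper:

```python
import random

def simulate_auction_prices(n_rounds=10_000, capacity=110, reserve=1.0,
                            value=100.0, seed=0):
    """Each round, demand ~ Binomial(200, 0.5) (mean 100). If demand fits
    within capacity the uniform price is the reserve; in the rare shortage
    rounds it jumps to the marginal customer's (much higher) value."""
    rng = random.Random(seed)
    prices = []
    for _ in range(n_rounds):
        demand = sum(rng.random() < 0.5 for _ in range(200))
        prices.append(reserve if demand <= capacity else value)
    return prices

prices = simulate_auction_prices()
spike_rate = sum(p > 1.0 for p in prices) / len(prices)
print(f"share of rounds clearing above the reserve: {spike_rate:.3%}")
```

Almost all rounds clear at the reserve, with occasional hundredfold spikes; that price volatility, plus the provider's incentive to shrink capacity and make spikes more frequent, is the paper's argument against auctions here.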

Monday, November 2, 2020

Ethics of machine learning--an interview with Michael Kearns and Aaron Roth

 Amazon Scholars Michael Kearns and Aaron Roth discuss the ethics of machine learning: two of the world’s leading experts on algorithmic bias look back at the events of the past year and reflect on what we’ve learned, what we’re still grappling with, and how far we have to go.  By Stephen Zorio



"In November of 2019, University of Pennsylvania computer science professors Michael Kearns and Aaron Roth released The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Kearns is the founding director of the Warren Center for Network and Data Sciences, and the faculty founder and former director of Penn Engineering’s Networked and Social Systems Engineering program. Roth is the co-director of Penn’s program in Networked and Social Systems Engineering and co-authored The Algorithmic Foundations of Differential Privacy with Cynthia Dwork. Kearns and Roth are leading researchers in machine learning, focusing on both the design and real-world application of algorithms.

Their book’s central thesis, which involves “the science of designing algorithms that embed social norms such as fairness and privacy into their code,” was already pertinent when the book was released. Fast forward one year, and the book’s themes have taken on even greater significance.

Amazon Science sat down with Kearns and Roth, both of whom recently became Amazon Scholars, to find out whether the events of the past year have influenced their outlook. We talked about what it means to define and pursue fairness, how differential privacy is being applied in the real world and what it can achieve, the challenges faced by regulators, what advice the two University of Pennsylvania professors would give to students studying artificial intelligence and machine learning, and much more."

Monday, December 23, 2019

Paul Milgrom's Marshall Lectures are now available on video

Auctions are ancient, but the linked auctions Paul talks about in his lectures are stunningly modern, and depend on high-powered, thoughtfully deployed, state-of-the-art computation.

"Market Design When Resource Allocation is NP-Hard," in two lectures.
Here they are:

Lecture 1




and Lecture 2:

Tuesday, November 19, 2019

Milgrom Marshall Lectures at University of Cambridge

Paul Milgrom will be giving the 2019-2020 Marshall Lectures at Cambridge today and tomorrow.  Here's a video abstract by Paul:





2019-20 Marshall Lecture by Professor Paul Milgrom

Paul Milgrom is best known for his contributions to microeconomic theory, his pioneering innovations in the practical design of multi-item auctions, and the extraordinary successes of his students and academic advisees. According to his BBVA Award citation: “Paul Milgrom has made seminal contributions to an unusually wide range of fields of economics including auctions, market design, contracts and incentives, industrial economics, economics of organizations, finance, and game theory.” According to a count by Google Scholar, Milgrom’s books and articles have received more than 90,000 citations. - Professor Milgrom's Personal Site >>

 Professor Paul Milgrom
(Stanford Department of Economics)
will give two lectures on,
"Market Design When Resource Allocation is NP-Hard"

Venue: Lady Mitchell Hall

Tuesday 19th November 2019
5.00pm to 6.00pm
and
Wednesday 20th November 2019
5.00pm to 6.30pm
*********
I'll update when Paul's lectures are available.
(In the meantime, here are my 2013-2014 Marshall Lectures on "Matching Markets and Market Design.")
************
Update: Both lectures are now available at the Marshall Lectures site.

Saturday, September 28, 2019

Automatic algorithmic affirmative action, by Ashesh Rambachan and Jonathan Roth

There's been some justified concern that algorithms that make predictions and choices based on previous choices made by humans might replicate the human biases embedded in the historic data.  Below is a paper that points out that the opposite effect could happen as well.

As explained here: "Imagine a college that has historically admitted students using (biased) admissions officers, but switches to an algorithm trained on data for their past students. If the admissions officers unfairly set a higher bar for people from group A, then assuming student performance is fairly measured once students arrive on campus, students from group A will appear to be stronger than students from group B. The learned model will therefore tend to favor students from group A, in effect raising the bar for students from group B."*

Here's the paper itself, and its abstract:

Bias In, Bias Out? Evaluating the Folk Wisdom
Ashesh Rambachan, Jonathan Roth

Abstract: We evaluate the folk wisdom that algorithms trained on data produced by biased human decision-makers necessarily reflect this bias. We consider a setting where training labels are only generated if a biased decision-maker takes a particular action, and so bias arises due to selection into the training data. In our baseline model, the more biased the decision-maker is toward a group, the more the algorithm favors that group. We refer to this phenomenon as "algorithmic affirmative action." We then clarify the conditions that give rise to algorithmic affirmative action. Whether a prediction algorithm reverses or inherits bias depends critically on how the decision-maker affects the training data as well as the label used in training. We illustrate our main theoretical results in a simulation study applied to the New York City Stop, Question and Frisk dataset.
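The selection mechanism in the abstract can be illustrated with a small simulation (my sketch, not the authors' code; the bars and sample size are arbitrary): outcomes enter the training data only for admitted students, and a higher admissions bar for group A makes the admitted group-A students look stronger on average.

```python
import random

def simulate(n=50_000, bar_a=1.0, bar_b=0.0, seed=0):
    """An officer admits students whose skill exceeds a group-specific bar;
    outcomes are observed (and enter the training data) only for admitted students."""
    rng = random.Random(seed)
    observed = {"A": [], "B": []}
    for _ in range(n):
        group = rng.choice("AB")
        skill = rng.gauss(0, 1)          # same true skill distribution in both groups
        bar = bar_a if group == "A" else bar_b
        if skill > bar:                  # selection into the training data
            observed[group].append(skill)
    return {g: sum(v) / len(v) for g, v in observed.items()}

means = simulate()
print(means)  # admitted group-A students have the higher average observed skill
```

A model trained on these observed outcomes would infer that group-A applicants are stronger, even though the two groups are identical: "algorithmic affirmative action" for the group the human decision-maker was biased against.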
**********

* I'm reminded of the saying "To get the same reward as a man, a woman has to be twice as good.  Fortunately that's not hard..."

Tuesday, April 23, 2019

Ethical algorithms: a recent talk and a forthcoming book

Increasingly, algorithms are decision makers. Here's a recent talk, and a book forthcoming in October, about what we might mean by ethical decision making by algorithms.




And here's the forthcoming book:
 The Ethical Algorithm: The Science of Socially Aware Algorithm Design Hardcover – November 1, 2019
by Michael Kearns (Author), Aaron Roth  (Author)

Monday, March 11, 2019

Cornell celebrates Eva Tardos

In the Cornell Sun:

A Tribute to the Women in Lab Coats, Behind the Microscopes and Computer Screens
By Sophie Reynolds, Catherine Cai, Caroline Chang


"Professor Eva Tardos, Computer Science
Cornell Professor Eva Tardos, computer science, focuses her research on the effects of “selfish users” in networks. A “selfish user” optimizes resource usage for their own benefit like in packet routing, crowdsourcing and bitcoin mining. Selfish optimization by one user can have a negative effect on other users because it could limit access to resources and subsequently slow down processes in their respective areas of a network.
“Understanding the tradeoff between more complicated designs that can mediate effects of selfish users versus a simpler design […] is an area that I have been working on for 20 or so years, and I still find it fascinating,” Tardos said.
Tardos’ research also overlaps heavily with algorithmic game theory. Tardos notes that her research is extremely interdisciplinary and that she actively communicates with Cornell faculty such as Professor David Easley, economics, as well as economics graduate students.
“The graduate course that I last taught, CS 6840: Algorithmic Game Theory, had econ grad students in it and those are the people I often talk to, even after the course.”
Tardos not only collaborates with economists at Cornell, but also with larger scientific communities such as in the Association for Computing Machinery (ACM) Conference on Economics and Computation, a forum to exchange ideas and converse over technical papers.
In addition to her research, Tardos teaches CS 4820: Analysis of Algorithms, a core class in Cornell’s computer science curriculum that focuses on the design and analysis of computer science algorithms.
“The best part of teaching undergraduate students is to teach students a principled way of thinking about algorithms,” Tardos said.
Tardos, a strong advocate for women in computing fields and an advisor for Women In Computing At Cornell, acknowledges that the number of women getting involved in computer science has not steadily risen like in many other STEM fields.
According to Tardos, the 1980s were a high point for women in computing, but after the dot-com boom and bust, the number of women in computer science started to drop.
“Fortunately, in recent years that trend has reversed, and we are doing much better in attracting women to the field,” Tardos said.
Cornell is already ahead of the curve in achieving a 1:1 ratio of women to men in engineering disciplines, and Tardos remains very optimistic about the prospect of more women in computing fields entering academia.
Tardos hopes that this generation of women does not underestimate the excitement of being a computer scientist at a research university."

Monday, February 11, 2019

Digital Economy and Inclusive Growth--report from the Luohan Academy

Here's an initial report from the Luohan Academy:

 Digital Technology and Inclusive Growth--Executive Summary

From the mission statement:
"Social scientists in general, including economists, must therefore collaborate to help societies adapt smoothly and fairly to the digital revolution. Two important objectives of the academic community are first, to understand business models and market structures that enable growth and progress, and second, to identify the impact of digitization on individual and social welfare. So far the rapidly increasing scale of digitization has not been followed by a corresponding increase in theoretically grounded empirical research on the rationales,  consequences, and policies of digitization. A well-organized research community could greatly facilitate and speed up such research efforts."

And, as the report makes clear, China is a good place to study ecommerce:

Tuesday, September 11, 2018

Kidney exchange and computation

Here's an article on how computation--broadly characterized as artificial intelligence--has changed kidney transplantation. It gives some historical background on hard decisions, going back to the first dialysis machines and coming forward to kidney exchange, and has some discussion of fairness.

How AI changed organ donation in the US
By Corinne Purtill

"Today, multiple US hospitals run their own paired kidney donation programs. There are also three larger US exchanges that organize kidney chains across hospitals: the United Network for Organ Sharing, the National Kidney Registry, and the Alliance for Paired Kidney Donation. National exchanges are in place in the UK, Canada, and the Netherlands, and paired donations have taken place in hospitals from India to South Africa.
...
"Given the dearth of public education on what “artificial intelligence” actually means, hospitals and exchanges are wary of patients misconstruing the role algorithms play in identifying potential matches, perhaps fearing conjuring images of robots coldly issuing life-or-death edicts.

Machines currently do not decide which kidneys go where. Humans do that. The algorithms in place today can do the math more reliably and at greater scale than humans can, and implement the judgments humans have already made, but they don’t have a contextual understanding of why they are being asked to perform a calculation in the first place.
...
“In economics we talk about impossibility theorems. There are things you might want that are not possible to get,” Roth says. “When you’re allocating scarce resources, you can’t give a kidney to one person without failing to give it to someone else…. Computers will not lift the burden from humans in every respect.”

Wednesday, June 27, 2018

Alibaba forms Luohan Academy to promote research in digital economy

I just returned from a lightning visit to Hangzhou, the home of Alibaba, which is founding a research academy to study the digital economy.  It will be very interesting to see what develops.

Here are some news stories, with pictures:

Alibaba Initiates the Open Research Platform "Luohan Academy"


"HANGZHOU, China, June 27, 2018 /PRNewswire/ -- Alibaba Group Holding Limited ("Alibaba Group") (NYSE: BABA) has advocated the establishment of the "Luohan Academy" ("Academy"), an open research platform with Nobel Laureates and leading international social scientists to address universal challenges faced by societies arising from the rapid development of digital technologies."
*************

阿里巴巴倡议成立罗汉堂 马云:希望罗汉堂为全世界服务
(Google translate: Alibaba proposes to establish the Luohan Academy. Jack Ma: I hope the Luohan Academy will serve the world.)
Jack Ma, Tom Sargent, Al Roth, Chris Pissarides. June 26, 2018, Hangzhou
***********
update: here's the first photo, annotated (from http://hznews.hangzhou.com.cn/chengshi/content/2018-06/27/content_7026385.htm)


Tuesday, April 17, 2018

Market design as a growing part of computer science

I'm encouraged to see this from Northwestern CS:
SPECIAL QUARTER: DATA SCIENCE & ONLINE MARKETS - APRIL 15 TO JUNE 15

"In recent years many aspects of social engagement have shifted to what can be broadly thought of as online markets. These markets, which include eBay, Uber, Airbnb, Tinder, StubHub, Wikipedia, Amazon’s Mechanical Turk, etc., are run by technology companies and are built by teams of software engineers. The science of designing and optimizing these online markets, however, is underdeveloped.  Designing a marketplace that works well is challenging because the behavior of participants in the marketplace depends on the design of the marketplace. ..."

Tuesday, February 28, 2017

Incentives in Computer Science--Tim Roughgarden

There was a time when only economists worried about incentives, but as this great looking computer science course by Tim Roughgarden shows, that time is long past...

CS 269I: Incentives in Computer Science



Instructor:
  • Tim Roughgarden (Office hours (note new time): Mondays 12:15-1:15 PM, Gates 474. Email: tim@cs.stanford.edu.)

Prerequisites: Mathematical maturity at the level of undergraduate algorithms (CS161). Programming maturity at the level of 106B/X.
Course Description: Many 21st-century computer science applications require the design of software or systems that interact with multiple self-interested participants. This course will provide students with the vocabulary and modeling tools to reason about such design problems. Emphasis will be on understanding basic economic and game theoretic concepts that are relevant across many application domains, and on case studies that demonstrate how to apply these concepts to real-world design problems. Topics include auction and contest design, equilibrium analysis, cryptocurrencies, design of networks and network protocols, matching markets, reputation systems, and social choice. Possible case studies include BGP routing, Bitcoin, eBay's reputation system, Facebook's advertising mechanism, Mechanical Turk, and dynamic pricing in Uber/Lyft.
General references: Twenty Lectures on Algorithmic Game Theory, Cambridge University Press, 2016. See also the Amazon page.
  • This textbook is based on the course CS364A. The overlap with 269I will be roughly 20-25%. Though if you enjoy this course, you're likely to also enjoy many of the topics in this book.
The following collection is older and targeted more to researchers than to students, but is still useful for several topics.
  • Algorithmic Game Theory, Cambridge University Press, 2007. Read the entire book online by clicking here (look under the "Resources" tab).
We will also draw on the following books for some of the lectures.
Lecture notes

Coursework

Tentative Syllabus (will likely change)

  • Week 1: Introduction to incentives through killer examples.
  • Week 2: Social choice (voting, Arrow's impossibility theorem, etc.).
  • Week 3: Incentives in peer-to-peer and social networks (e.g., incentives in BitTorrent).
  • Week 4: Incentives in communication networks (routing, flow control, etc.).
  • Week 5: Incentives in cryptocurrencies (like Bitcoin).
  • Week 6: Reputation systems. Incentives in crowdsourcing.
  • Week 7: Basic auction theory (eBay, sponsored search auctions).
  • Week 8: Advanced auction theory and mechanism design (Facebook advertising auctions, contest design).
  • Week 9: Scoring rules and prediction markets.
  • Week 10: Lessons from behavioral economics (i.e., how do people make decisions, anyway?).

Detailed Lecture Schedule


  • Lecture 1 (Mon Sept 26): The incentives of the Draw, past and present. Pareto optimality and strategyproofness. College admissions. One-sided vs. two-sided markets. The National Resident Matching Program (NRMP). Supplementary reading:
  • Lecture 2 (Wed Sept 28): Stable matchings. Properties of the deferred acceptance (Gale-Shapley) mechanism. Could college admissions go through a centralized clearinghouse? Supplementary reading:
  • Lecture 3 (Mon Oct 3): Participatory democracy. Strategic voting. Spoilers and the 2000 US election. Majority, plurality, ranked-choice voting, Borda counts. Gibbard-Satterthwaite and the impossibility of reasonable strategyproof voting rules. Arrow's Impossibility Theorem. Compromises, single-peaked preferences, and the median voting rule. Supplementary reading and resources:
    • Participatory budgeting in general and at Stanford.
    • The rank aggregation problem.
    • Reasonably short proofs of the Gibbard-Satterthwaite and Arrow impossibility theorems are here (see Sections 1.2.3 and 1.2.4).
    • Chapter 23 of the Easley/Kleinberg book (see general references).
  • Lecture 4 (Wed Oct 5): Subjective vs. objective interpretations of voting rules. Metaphor: linear regression as the maximum likelihood solution with normally distributed errors. Marquis de Condorcet and majority rule as a maximum likelihood estimator. The Kemeny-Young rule. Knapsack voting and its properties. Supplementary reading and resources:
    • The dramatic life of Marquis de Condorcet.
    • See Pnyx for an implementation of the Kemeny rule.
    • Knapsack voting, by Goel/Krishnaswamy/Sakshuwong (2014).
    • Section 15.2 of the Parkes/Suen book (see general references).
  • Lecture 5 (Mon Oct 10): Incentives in peer-to-peer (P2P) networks. History lesson: Napster, Gnutella, etc. Free riding on Gnutella. Prisoner's Dilemma. Repeated Prisoner's Dilemma: the grim trigger and Tit-for-Tat strategies. Tit-for-tat in the BitTorrent reference client. Strategic clients (BitThief and BitTyrant). Supplementary reading:
  • Lecture 6 (Wed Oct 12): Coordination games. Technology adoption and network cascades. Individual vs. collective preferences in public good problems. Case study: badge design in Stack Overflow, Coursera, etc. Supplementary reading:
  • Lecture 7 (Mon Oct 17): Selfish routing and network over-provisioning. Braess's paradox and Pigou's example. The price of anarchy. Modest over-provisioning guarantees near-optimal routing.
  • Lecture 8 (Wed Oct 19): The Border Gateway Protocol for Internet routing. Stable routings: non-uniqueness and non-existence. Dispute wheels and the convergence of BGP to a unique solution. Incentive issues. Incentive-compatibility with path verification. Supplementary reading:
  • Lecture 9 (Mon Oct 24): Incentives in Bitcoin mining. Transactions and the Bitcoin blockchain protocol. Forks. Incentive issues: the 51% attack, the double-spend attack, and selfish mining. Supplementary reading:
  • Lecture 10 (Wed Oct 26): Incentives in crowdsourcing. Bitcoin in a regime with high transaction fees. The DARPA Network Challenge and incentivizing recruitment. Sybil attacks and possible solutions. The "Wisdom of the Crowd": fact or fiction? Herding behavior and information cascades. Supplementary reading:
  • Lecture 11 (Mon Oct 31): Incentives in societal networks (guest lecture by Balaji Prabhakar). "Nudges" for changing behavior. Case studies in Bangalore, Singapore, and at Stanford.
  • Lecture 12 (Wed Nov 2): Adverse selection, moral hazard, and reputation systems. The market for lemons. Analogs in health insurance, the labor market, and online platforms. Moral hazard. Reputational effects in the n-person Prisoner's Dilemma. Whitewashing and the pay-your-dues strategy. Sybil attacks. Case study: the evolution of eBay's reputation system. Supplementary reading:
  • Lecture 13 (Mon Nov 7): Auction design basics. How would you bid in a first-price auction? The Vickrey auction and truthfulness. Welfare maximization. Introduction to sponsored search auctions.
  • Lecture 14 (Wed Nov 9): The theory of first-price auctions. Externalities. VCG: a truthful sponsored search auction. GSP vs. VCG. Supplementary reading:
  • Lecture 15 (Mon Nov 14): Revenue equivalence of the GSP and VCG sponsored search auctions. VCG in AdSense and Facebook. The general VCG mechanism and its truthfulness. Practical issues with VCG. Supplementary materials:
  • Lecture 16 (Wed Nov 16): Revenue maximization. Bayesian optimal auctions. Monopoly prices. Optimality of Vickrey with a monopoly price reserve. Case study: reserve prices in Yahoo! keyword auctions. Prior-independent auctions and the Bulow-Klemperer theorem. Further reading:
  • Lecture 17 (Mon Nov 28): Strictly proper scoring rules. Incentivizing honest opinions. Output agreement. Peer prediction. Further reading:
    • Section 27.4 of the AGT book (see general references).
    • Chapter 17 of the Parkes/Seuken book (see general references).
  • Lecture 18 (Wed Nov 30): Prediction markets. The Iowa Electronic Markets and continuous double auctions. The Policy Analysis Market and the Wisdom of Crowds. Market scoring rules and automated market-makers. Further reading:
  • Lecture 19 (Mon Dec 5): Behavioral economics. Time-inconsistent planning: procrastination, choice reduction, and undue obedience. Upper and lower bounds on cost ratios. Naive vs. sophisticated agents. Further reading:
  • Lecture 20 (Wed Dec 7): Fair division. The cut and choose protocol and envy-freeness. The Selfridge-Conway envy-free protocol for 3 players. Recent advances for 4 or more players. The rent division problem, and the maxmin envy-free solution. Further reading:
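Lecture 3's point that reasonable voting rules can disagree is easy to make concrete. The sketch below (a toy preference profile of my own construction, not from the course materials) runs plurality, Borda count, and instant-runoff (ranked-choice) voting on the same ballots and gets three different winners:

```python
from collections import Counter

# Ranked ballots, best candidate first. The profile is chosen so that
# plurality, Borda, and instant-runoff each elect a different candidate.
ballots = [("A", "B", "C")] * 6 + [("C", "B", "A")] * 5 + [("B", "C", "A")] * 4

def plurality(ballots):
    """Winner by first-place votes only."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Winner by Borda count: k candidates give k-1 points for 1st place, etc."""
    k = len(ballots[0])
    scores = Counter()
    for b in ballots:
        for rank, cand in enumerate(b):
            scores[cand] += k - 1 - rank
    return scores.most_common(1)[0][0]

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-place votes
    among those still standing, transferring ballots to the next survivor."""
    remaining = set(ballots[0])
    while len(remaining) > 1:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        remaining.remove(min(remaining, key=lambda c: tally[c]))
    return remaining.pop()

# Plurality picks A (6 first-place votes), Borda picks B (19 points),
# and instant-runoff picks C (B is eliminated first, then C beats A 9-6).
results = (plurality(ballots), borda(ballots), instant_runoff(ballots))
```

Profiles like this one are exactly why Gibbard-Satterthwaite bites: whichever rule is used, some voters would rather misreport.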


Thursday, November 5, 2015

Algorithmic economics at Microsoft Research NYC

A rose by any other name (and a postdoc position at Microsoft Research NYC): Algorithmic Economics at Microsoft Research

Here's their description of what they do.

"Research in the Algorithmic Economics group at MSR-NYC spans a wide variety of topics at the interface of economics and computation. Application areas include auctions, crowdsourcing, gaming, information aggregation, machine learning in markets, market interfaces, market makers, monetization, online advertising, optimization, polling, prediction engines, preference elicitation, scoring rules, and social media.

Increasingly, online service design teams require dual expertise in social science and computer science, adding competence in economics, sociology, and psychology to more traditionally recognized requirements like algorithms, interfaces, systems, machine learning, and optimization. Our researchers combine expertise in computer science and economics to bridge the gap between modeling human behavior and engineering web-scale systems."


Call for Postdocs in Algorithmic Economics. Deadline for full consideration: December 8, 2015
"Eligible applicants must hold a Ph.D. in computer science, economics, operations research, or a related field. More specifically, we seek applicants who embody a diverse mix of skills, including a background in computer science (e.g., artificial intelligence or theory), and knowledge of the theoretical and experimental economics literature."

Thursday, January 29, 2015

The App Economy: at SIEPR

I'm looking forward to spending the day tomorrow at SIEPR, learning about how apps are changing the internet and the world. (That's one of the perks of working in Silicon Valley...)

SIEPR Policy Forum, The App Economy, Friday, January 30th, 2015.


Once again the Valley is buzzing with startups, new ventures, and concerns about a bubble.  Our next Forum will look at how the move to mobile and the explosion of entrepreneurial activity are once more driving innovation.

For the App Economy, we want to look at two big questions:

1.  What drives the Mobile and App Economy?   
2.  How is the rise of the App Economy - and mobile technology more generally - changing the rest of the economy?


Agenda

10:00-11:00: The Mobile App Economy
Simon Khalaf, CEO, Flurry from Yahoo, "The App Economy 2015"
Tim Bresnahan, Stanford Economics & SIEPR, and Pai-Ling Yin, SIEPR, "How Mobile Platforms Compete"

11:00-12:15: Apps and Data
"How Mobile and Big Data Change Travel"
Amir Ghodrati, AppAnnie, "App + Data"
Steve Tadelis, UC Berkeley Economics, "Consumer Mobile Payments and Finance Big Data"

1:00-2:15: The Next 3 Billion Users
Rick Osterloh, CEO, Motorola Mobility, "Reaching The Next 3 Billion Users"
Ming Zeng, Chief Strategy Officer, Alibaba
Ethan Yeh, Lead Economist, Twitter, "Apps for the Developing World"

2:15-3:15: Apps and Profits
Anna Bager, Senior VP, Interactive Advertising Bureau, "Mobile Ads and Video"
Liran Einav, Stanford Economics, "Mobile's Impact on Ecommerce at EBay"

3:15-4:00: Investing in Connected Commerce
Ashwini Chhabra, Uber
Simon Rothman, Partner, Greylock Venture Capital

Reception following conclusion.


A printable agenda, with biography links, is here.

Registration is available here.    

All current Stanford students are welcome, as well as members of the Stanford community with a Stanford ID. Others are welcome by invitation. The event is free. It begins at 10am, Friday, January 30th, at the Stanford Institute for Economic Policy Research, 366 Galvez Street, Stanford.

Sunday, January 11, 2015

Eva Tardos honored as ICIAM lecturer

Here's the story from Cornell, celebrating Eva Tardos, with a nice view of how market design looks from a computer science perspective:

Computer scientist Eva Tardos honored as ICIAM lecturer

"The International Council for Industrial and Applied Mathematics (ICIAM) has selected Éva Tardos, Cornell’s Jacob Gould Schurman Professor of Computer Science and senior associate dean of the Faculty of Computing and Information Science, to deliver the Olga Taussky-Todd Lecture at the international meeting of applied and industrial mathematicians in Beijing, Aug. 10-14, 2015. It is the most important international event in applied and industrial mathematics, held once every four years.

"This honor is conferred on a “woman who has made outstanding contributions in applied mathematics and/or scientific computation.”

"The lecture is named in memory of Olga Taussky-Todd, whose scientific legacy is in theoretical and applied mathematics, and whose work exemplifies the qualities to be recognized. Lecturers are selected by a committee established by the ICIAM president, with advice from the Association for Women in Mathematics and European Women in Mathematics. Tardos was chosen for her numerous and deep contributions to the fields of combinatorial optimization, discrete algorithms and algorithmic game theory, and her ability to convey the basic ideas and inspire others to pursue them.

"Tardos joined the Cornell faculty in 1989 and became chair of the Department of Computer Science in 2006. Her recent work concerns algorithmic game theory applied to networks. She is most known for her work on network-flow algorithms; approximation algorithms; and quantifying the efficiency of selfish routing, an emerging area of designing systems for “selfish users” who all seek the best possible outcome for themselves, as in competing for bandwidth on the Internet. Her work shows that in many cases the selfish solutions do reasonably well, which diminishes the need for central coordination. In a recent paper, “Network formation in the presence of contagious risk,” she studied the cascade effects of failures, especially relevant after the 2008 economic crisis.

Friday, October 18, 2013

The Warren Center for Network and Data Sciences at Penn

I'm in Philadelphia today, to speak at the inauguration of Penn's new Warren Center for Network and Data Sciences. The directors are Michael Kearns and Ricky Vohra, and they have a multi-disciplinary group of Penn faculty lined up to participate.

Here's the announcement: New Network and Data Sciences Center to Open at Penn

And here's the prospective account of the talk I should try to give...

In addition to Kearns and Vohra, the launch event will feature talks by Eduardo Glandt, the Nemirovsky Family Dean of the School of Engineering and Applied Science, and by Alvin E. Roth of Stanford University, who shared in the 2012 Nobel Prize in Economic Sciences.
Roth’s Nobel-winning work was on a theory for finding mutually beneficial matches between nodes in complex networks. The theory’s applications include the process medical students use to apply for residencies, as well as a program for finding compatible donors and recipients for kidney transplants.    
“We’re planning on funding research projects that, in addition to being scientifically stellar, have some chance of doing social good,” Kearns said. “That’s one of the reasons Roth’s talk is perfect for this launch: network science can show which kidney is compatible with each recipient, and it needs to draw on algorithms, networks and big data sets, but, at the end of the day, this was a research project that saved people’s lives.”

Tuesday, November 8, 2011

Market design in a future of trusted smart markets: paper by Eduardo Azevedo and Eric Budish

The recent NBER conference on market design had a number of remarkable papers. One of them, by Eduardo Azevedo and Eric Budish, seems to me to offer a tantalizing glimpse of what market design might look like at some time in the not too distant future, when there is a high level of trust in computerized "smart" markets in which a proxy agent reliably acts on your behalf.

At least that's one way to interpret their paper "Strategyproofness in the Large as a Desideratum for Market Design." Among other things, it considers a kind of extension of the revelation principle that would allow a non-strategy-proof mechanism with an attractive Bayesian Nash equilibrium to be converted into a direct mechanism (e.g. one in which agents are asked to reveal their preferences) that would be "strategy proof in the large" (SP-L), i.e. approximately strategy proof in large markets, and strategy proof in the limit. (Another big contribution of their paper is making precise the idea of strategyproofness in the large, which, they argue, may be a desirable criterion when no strategy-proof mechanism exists, or when markets are large. The idea is that mechanisms are SP-L but not strategy proof when they allow players' reports to influence prices in ways that vanish in the limit, whereas mechanisms that aren't even SP-L allow more fundamental manipulations, e.g. they don't give you what you want even when you're a price taker.)

About the revelation-principle-type mechanisms, they say:
"The construction works as follows. Agents report their types to our mechanism. Our mechanism then calculates the empirical distribution of these types, and then “activates” the Bayes-Nash equilibrium strategy of the original mechanism associated with this empirical. If agents all report their preferences truthfully, this construction will yield the same outcome as the original mechanism in the large-market limit, because the empirical distribution of reported types converges to the underlying true distribution. The subtle part of our construction is what happens if some agents systematically misreport their preferences, e.g., they make mistakes. Suppose the true prior is u, but for some reason the agents other than agent i systematically misreport their preferences, according to distribution m. In a finite market, with sampling error, the empirical distribution of the other agents’ reports is, say, m̂. As the market grows large, m̂ is converging to m, and also i’s influence on the empirical distribution is vanishing. Thus in the limit, our construction will activate the Bayes-Nash equilibrium strategy associated with m. This is the “wrong” prior – but agent i does not care. From his perspective, the other agents are reporting according to m, and then playing the Bayes-Nash equilibrium strategy associated with m, so i too wishes to play the Bayes-Nash equilibrium strategy associated with m. This is exactly what our constructed mechanism does on i’s behalf in the limit. Hence, no matter how the other agents play, i wishes to report his own type truthfully in the limit, i.e., the constructed mechanism is SP-L."
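The quoted construction is mechanical enough to sketch as a proxy agent in code. Below is a toy rendering: the wrapper collects reports, forms the empirical distribution, "activates" the equilibrium strategy for that empirical, and plays it on each agent's behalf. The function names, the first-price auction, and especially the toy `toy_bne` strategy are my own illustrations for the skeleton, not anything derived in the paper:

```python
from collections import Counter

def make_spl_mechanism(bne_strategy, original_mechanism):
    """Wrap a mechanism with known Bayes-Nash equilibria into a direct
    mechanism, following the construction quoted above (a toy sketch).

    bne_strategy(prior) should return the equilibrium strategy (a map from
    true type to action) of the original mechanism under that prior."""
    def direct_mechanism(reported_types):
        # Step 1: treat the reports themselves as the empirical prior.
        counts = Counter(reported_types)
        total = sum(counts.values())
        prior = {t: k / total for t, k in counts.items()}
        # Step 2: "activate" the equilibrium strategy for that empirical
        # prior and play it on each agent's behalf.
        sigma = bne_strategy(prior)
        actions = [sigma(t) for t in reported_types]
        # Step 3: run the original mechanism on the induced actions.
        return original_mechanism(actions)
    return direct_mechanism

# Toy instantiation: a first-price auction whose (hypothetical, purely
# illustrative) "equilibrium" shades each bid toward the empirical mean.
def toy_bne(prior):
    mean = sum(t * p for t, p in prior.items())
    return lambda v: (v + mean) / 2  # not a derived equilibrium

def first_price(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, bids[winner]

mech = make_spl_mechanism(toy_bne, first_price)
# Reports of 10, 6, 2 give empirical mean 6, so shaded bids 8, 6, 4:
# agent 0 wins and pays 8.
winner, price = mech([10, 6, 2])
```

The point of the skeleton is that the agent only ever submits a truthful type report; all of the equilibrium reasoning is delegated to the mechanism itself.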

The attraction of such a mechanism of course is that it doesn't depend on the agents reaching a Bayes-Nash equilibrium, which is the problem with mechanisms whose desirability is based on the attractiveness of their equilibrium behavior. Equilibrium may be hard to reach, and such mechanisms may perform badly in practice as a result. But coordination on an equilibrium is much easier when truth telling is a dominant strategy.

The reason this seems like a future mechanism rather than one that is promising for practical application right now is that it is pretty opaque, the opposite of transparent. I can't yet imagine going to e.g. a school district and proposing such a mechanism, which you'd have to sell to parents by saying "tell us your true preferences, and we'll act on your behalf to get you your highest ranked school choice by playing the equilibrium that will arise when we see the choices of all families." The problem is not just that the equilibrium might be hard to describe in the abstract, but that this difficulty is compounded by the fact that assignments will depend in this hard to describe way on an unknown distribution of preferences.

But what might be a tough sell today will be a much easier sell when everyone is accustomed to having their data automatically backed up in the cloud by software that optimizes performance based on things only it observes, and to having their electricity consumption mediated by smart meters that run the air-conditioner in a way that reduces costs based on spot prices, etc.

So...engineering is like that. Just as bridges have gotten longer and stronger over time, there's no reason to think that the market designs of today will be the ones we build in the future. The prospect of confidently putting yourself in the hands of a non-transparent automated expert that you may not understand, a "Martian system" so to speak, may be agreeable to the general public of the future.

(The phrase "Martian system" is one I recall from the early days of expert systems and decision aids. The idea was that you were likely to trust an automated adviser more if you could understand its reasoning, and so judge when its advice was likely to be correct. If you got a non-intuitive answer from an opaque oracle, a "Martian system" instead of an expert system, you might worry that the answer was wrong because of wrong inputs or bad construction, and so ignore it. But a transparent system might convince you that a non-intuitive answer was correct, if you were more confident that you could tell when it wasn't correct. But if the Martian adviser became so reliable that you could be sure he would not produce an incorrect answer, his opacity might become less of a drawback, since you could rely on him anyway.)

By the way, did I mention that Eduardo is on the job market this year? He's a talented theorist with broad interests who has already made important contributions to matching theory, among other things. Here are his papers. You could hire him.

Friday, May 27, 2011

The market for taxis

With a Start-Up Company, a Ride Is Just a Tap of an App Away
"Uber, a start-up based in San Francisco, offers a cellphone application that is aimed at making using a car service quick and painless.
...
"Uber is not a taxi or limousine company. Instead it operates as a dispatch service, working with local owners of licensed private car companies. Uber provides each car with an iPhone and software that manages incoming requests. When an Uber user needs a ride, the dispatcher and the closest car are notified, and the system sends back an estimate of the pick-up time. While they wait, users can monitor the car’s location on their phone."
...
"Uber, which is available for the iPhone and Android devices, requires users to enter their credit card information when they sign up. When they reach their destination, they can simply hop out, and the ride is charged to the card. Uber gets a percentage of each fare; the rest goes to the car services and drivers."
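The dispatch flow the article describes (notify the closest car, send back a pickup estimate) reduces to a nearest-neighbor query plus a travel-time guess. A minimal sketch, with all names, coordinates, and the assumed average speed purely illustrative:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def dispatch(rider, cars, speed_kmh=30):
    """Pick the closest available car and estimate pickup time in minutes.
    `cars` maps a car id to its current (lat, lon) position."""
    car_id, pos = min(cars.items(), key=lambda kv: haversine_km(rider, kv[1]))
    eta_min = haversine_km(rider, pos) / speed_kmh * 60
    return car_id, eta_min

# Hypothetical positions: two cars in San Francisco, one a few blocks away.
cars = {"car_1": (37.7749, -122.4194), "car_2": (37.7849, -122.4094)}
rider = (37.7760, -122.4180)
best_car, eta = dispatch(rider, cars)
```

A production dispatcher would of course use road-network travel times rather than straight-line distance, but the matching decision has the same shape.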