Blue Fish: In different speeches and press releases you say that architecture cannot be an afterthought. What do you mean by that?
One thing about building software is that it's a lot like building a house. Actually, believe it or not, in the computer science program at MIT, the professors taught us that architecture is everything.
As an example, look at a mouse, which is very agile running around the earth. But the fact is, if you blew it up to the size of an elephant, it would collapse under its own weight, because its bones are hollow. On the surface it looks like "why couldn't a mouse scale?" but the fact is, structurally it was designed for that size and purpose, and it's the same thing when building a house.
When you have a foundation that's good for one or two stories and you build ten floors on top of it, it will topple. It's the same for software: data structures, layering, partitioning of the system. If you don't think it through, it doesn't scale, and that's what I mean by afterthoughts. Obviously you could renovate, and there are other things you could potentially do, but it's often very painful.
In our world, we often talk about how our system is scalable, and that's reflected in multiple ways. The most fundamental aspect is our data model. To draw an analogy: when I was young, we used to do our programming in FORTRAN or C. We learned that it's very easy to write a data structure for storing elements. You can allocate an array and use an index to access it, or you can build a linked list, which takes more effort. So why wouldn't everything be an array? Everything could be, except that if you try to insert something in the middle of an array, it becomes a very painful task.
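The trade-off he describes can be shown in a few lines. This is only an illustrative sketch (in Python rather than the FORTRAN or C of the anecdote): inserting at the front of an array-backed list shifts every existing element, while a linked structure simply splices in a node.

```python
from collections import deque
import timeit

n = 100_000
array_like = list(range(n))    # contiguous array: O(n) insert at the front
linked_like = deque(range(n))  # linked blocks: O(1) insert at either end

# Each front-insert into the array shifts all existing elements over by one.
t_array = timeit.timeit(lambda: array_like.insert(0, -1), number=1000)
# The deque just links in a new node; nothing is shifted.
t_linked = timeit.timeit(lambda: linked_like.appendleft(-1), number=1000)

print(f"array front-insert: {t_array:.4f}s, linked front-insert: {t_linked:.4f}s")
```

Indexed access favors the array; frequent middle or front insertion favors the list, which is exactly the "for what purpose" design question he raises next.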
So it’s for what purpose it has and how you plan to grow it are
affecting the design decision. You had to think through what is the
purpose, what’s the intent of the scalability, to what range and what
type of operation are you going to exert on that and then you come to a
conclusion as what proper schema or model are you going to have. It is
the thoughtfulness pays off.
Frankly, we do make mistakes, because it's very hard to anticipate what the future will be like ten, fifteen, twenty years from now. Sometimes, even if we can anticipate it, we may not be able to afford to build it the right way. So, from time to time, we actually re-architect or clean up the portions we know we could have done better. We don't just keep building the house. Occasionally we anticipate that the next level is going to exceed our foundation, and we'll go back, clean up the foundation, and strengthen it, and then people can continue to build. We are never in a situation where we have an unbalanced architecture.
Blue Fish: Looking back on when you founded Documentum, are there things that you wish you had done differently in the architecture?
On the server side there isn't that much, and that's partly because I, as well as John Newton and Razmik Abnous, are all database guys. So it wasn't that we guessed right; we had actually done databases for a decade or so. Since we were builders of database engines, we were already familiar with the general issues. If you think about Documentum, in order to manage content we need to do a lot of metadata tracking. That's very analogous to database functionality. So building a scalable architecture wasn't a big deal for us. The biggest shock waves we had to adapt to mostly occurred during the Internet era.
The flexibility of the web infrastructure, and how quickly it evolved, took us by surprise. I'm not sure how I would necessarily have done it differently. Living in that era, it was very difficult to anticipate where the future would lie. I remember, for a while, Marimba was going to be the thing of the future. Now nobody wants Java on clients, so imagine the surprise there. Today, even Akamai is no longer in fashion. So it isn't clear that, at that point in time, we could have done better. We probably could have been more aggressive in investing in that arena, but the truth is the majority of the expenditure would have been …
I was listening to a pitch by Lester Thurow, a famous economist at MIT. He was saying that if you were Moses, and you could talk to God in 1981, and God said, "invest in PCs because you will be shipping a quarter billion units a year in 2004," Moses would come down from the mountain and buy into a PC company; he would buy Commodore stock, because the real players like Microsoft did not show up until 1985. It's very difficult to bet on the future, even when you know the trends.
But who knows that? Eventually we did manage to rebuild our architecture to reflect our learning, our competitors haven't, and so far that has served us well.
Blue Fish: John McCormick mentioned this morning that there was a debate as to whether [Documentum 5.3] should have been 6, but now it looks like there will be a 5.4 in about 15-18 months. He then had one slide with just a few statements about what 6 could be like, and it sounded like you were going to address the repository, which has not been addressed for quite some time. What are some things you think about when you think of 6?
That’s a good question. Our learning from the Internet hasn’t
stopped yet. From the Internet side, the user interaction based on XML,
HTML, the hyperlinks and collaboration models are very powerful. Those
lessons are now reflected in our system. But there is another thing
that’s maybe not as obvious is the Internet turned out to be a highly
agile infrastructure and totally self service. When I bring up my
website, I don’t need to inform everybody else in the world. It just
takes care of itself. When I introduce a new proxy server or a new
cache somewhere, all the other browsers benefit from that effortlessly.
That type of design point is a departure from how we built our content
Our content infrastructure assumes lots of planning and anticipation: you lay out the infrastructure in the right places, and therefore we can do the right thing with it. It's centrally managed. While that's not a bad idea, we can do better by taking the Internet's idea of a more adaptive infrastructure. If our distributed content can automatically move around and find its way to the right place, cache and replication become interchangeable, and management of a distributed content infrastructure becomes easier. That's part of the reason we got together with EMC. For the last thirteen years, I often thought that if I had the ability to push more intelligence into the storage system, the content distribution problem would be easier. For example, maybe our Documentum software doesn't have to know the network topology; we can delegate the problem to a lower-level storage layer.
The big difference between C and C++ versus Java is that you don't have to keep track of your pointers. In C, you acquire and free pointers, and if you mismatch the acquisition and freeing of pointers, you have a corrupted system. Java basically says the computer is so cheap, let the computer keep track of those pointers. Memory management may not be as efficient or as quick, but we've got so much memory, who cares? You can imagine that may not be the absolute best, and it may not be efficient, but it's pretty effective. Similarly, for my content replication and distribution problem: if my content can propagate to the right place at the right time, whether it made an extra copy or took a longer path, so long as I can afford it, I don't care. If I delete the original copy, I want every copy removed automatically. With this approach, management of the overall Documentum infrastructure could be dramatically simplified. Doing this calls for a rethink of how we do business at the content infrastructure level.
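The garbage-collection analogy can be sketched as a toy registry (a hypothetical illustration, not Documentum's design): the system, rather than an administrator, remembers where every copy propagated, so deleting the original reclaims all of them, the way a collector frees memory the programmer no longer tracks.

```python
class ContentRegistry:
    """Toy model of "delete the original, every copy goes away":
    the infrastructure tracks replica locations so nobody has to."""

    def __init__(self):
        self.replicas = {}  # content id -> set of locations holding a copy

    def replicate(self, content_id, location):
        # Content may propagate anywhere (cache, remote site, extra copy);
        # the registry just records where it went.
        self.replicas.setdefault(content_id, set()).add(location)

    def delete(self, content_id):
        # Cascading removal: return every location whose copy must be
        # reclaimed, then forget the content entirely.
        return self.replicas.pop(content_id, set())
```

The point of the sketch is the bookkeeping discipline, not the mechanism: as long as propagation is recorded, extra copies and longer paths are affordable, because cleanup is automatic.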
Blue Fish: What about from a UI perspective? John mentioned how with 6 you may have users able to drag and drop different components, or sets of components, and actually build applications for functionality through that method.
That’s right. So that’s actually not
something new to 6. It’s something we always have aspired to and build
to. The fact is that 5.25 has already started doing that; all our
components are built in such a way it’s JSR 168 compliant, so while you
can use WDK components in the Web Top, you can also use it in the
portal, with a drag and drop behavior. We will take advantage of this
infrastructure to implement our future applications. Today if you use a
product, especially something like an ERP system, you are using a
predefined interface. You’re seeing all the features laid out in the
way somebody in the software vendor has chosen. They have made the
decision for you. It lacks the context about who you are, what your
preferences are and what you will be doing. It is this way, partly
because the existing infrastructure doesn’t support other ways, and
partly because it’s not a well understood problem.
With the innovation in our BPM engine and life-cycle technology, you can imagine that by the time it's your turn to do something using our interface, we actually know who you are, and therefore we know your preferences. We also know what your role is and where you are in the workflow process, because it's part of the context. So, potentially, we can generate an application just for you, just for that role, just for that task, just for that moment in time. You also don't have to check everything in or out manually. With your context, we know which set of information needs to be found, which can be put away, and which should be retained as institutional memory. With this new approach, we'd like to take our user interface to the next level. It's task-centric, role-centric, and highly personalized. That's something we've been working on, but you're right; 6 is probably when it will materialize as part of the offering.
Blue Fish: Will there also be, with 6, a consideration of closer integration with a hardware component, or a more appliance view of the world, as John talked about? I guess the follow-up question is: what is the next integration point with …
It is something we already knew, but during the last fifteen months what really hit home is that the major EMC storage systems have more computing power than some of the computing servers they are attached to. Hardware and systems guys have known for a long time that whether something is implemented in software or hardware has more to do with cost, performance, and flexibility than anything else. Courtesy of VMware, we can easily wrap our system into a software appliance. If I ship it with an Intel processor and put it in a box, we have just created a hardware appliance. The flexibility is here now.
We've been working on embedded systems. People often think an enterprise system implies gigantic size. Actually, our system footprint is not that big. We have a lot of products; that's why it looks big. The server itself and all the key components are rather small. To demonstrate the point: our system engineers can run everything needed to support a proof-of-concept project on a single laptop. So we can fit it all on a laptop, and imagine what we can assemble with Linux, open-source software, or bulk licenses of a database and app server. Now we can create a content management appliance that can be distributed rapidly. If nothing else, the EMC folks, the classic hardware guys, will be better at selling the product.
Outside of the content management area, something pretty cool that I'm working on with our greater EMC team is that we are trying to get all the file systems under control. Today, with Documentum, you can have an automatic classification process that puts a file in the right folder with all the access control and life cycle. But this requires you to check every file into Documentum, and as hard as I try, a huge amount of enterprise information is still on shared drives or laptops. It's not inside Documentum. True, the high-value files are in there, but there's so much more outside that ought to be managed.
If you run an analysis on shared drives, you'll find that almost 50% of the information is duplicated. Imagine I send you a PowerPoint document; what would you do? If you think it's relevant, you will immediately put it in your shared folders, where everybody makes a copy. And EMC thanks you for that. Then people come and go. Most shared drives are like my laptop: I never delete any file, because I'm always afraid I may lose something. So my drive goes from under 5 GB to 80 GB, and I'm running out of space.
Imagine what could be done with Documentum technology, where we could easily catalog everything. Since we already have a concept of external content, I have no reason to import the content into my repository; I can analyze all the files directly. With our content intelligence service, I can figure out what the concepts are. Using hash values of files, I can find out whether they are unique. I can manage your file systems without importing them. For example, when I find all the duplicates, I can remove all but one and fix up the links to point to the single unique file. Now, why didn't we do that before? Because I don't control the file system, but EMC has file system technology. So, as far as the user is concerned, every file remains in the same place, but they all point to the same file now. Now imagine the next step. Let's say there is contract-related content that should not be on a shared drive. We can migrate it into Documentum silently while leaving forwarding addresses where the files were, so when a user tries to access a contract through their normal file system, they get it, because we retrieve it from the Docbase automatically.
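The duplicate-finding step he describes can be sketched like this, a minimal, hypothetical illustration of hashing file contents to spot identical copies; a real system would add the link fix-up and policy handling he mentions.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def content_hash(path: Path) -> str:
    """Hash a file's contents in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root: Path) -> dict[str, list[Path]]:
    """Group files under `root` by content hash; any group with two or
    more paths is a set of byte-identical duplicates."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for p in root.rglob("*"):
        if p.is_file():
            groups[content_hash(p)].append(p)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

From each group, a cleanup pass would keep one canonical copy and replace the rest with links, which is why controlling the file system layer matters for the last step.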
Blue Fish: Like a bread crumb?
Exactly, but the real file is now under management, with records management and all the proper policies taken care of automatically.
Blue Fish: The user doesn’t care.
That’s right. Exactly, so we preserve the
location integrity, which is really how people think about that kind of
information. Yet, we’re adding all sorts of value in a totally
non-intrusive manner, and that can completely redefine what is managed
content and what isn’t. It isn’t just copying to Documentum. I can
decide this one was important, therefore move it to a disaster recovery
site, but I don’t want everybody’s MP3 files moved over. Maybe you
want, maybe you don’t. The fact is now you can start choosing it. You
do all those back-ups, archives, disaster recovery without human
interventions. You just have to have a policy.
That’s also part of what EMC calls Information Life Cycle
Management. So I can also tier information to a different storage
types. For instance, Serial ATA. In essence, all your information
could be under management. And, the level of management and the level of
effort you spend can, for the first time, proportion to your business
needs. It’s not an all or nothing exercise and something like that will
be a natural progression of our marriage and we hope to have that out
Blue Fish: We know that from a content management perspective you have competitors like FileNet and Hummingbird. But from an Information Life Cycle Management perspective, if you look at two years from now, what are the threats to the strategy you talked about?
Well, actually, Information Life Cycle Management is something that pretty much everybody has embraced. EMC is the one, I think, that coined the phrase Information Life Cycle Management, in pretty much the same way we talk about content life cycle. The thesis is that information has different values during its life cycle, so you don't need to put everything in the most expensive place all the time, because it doesn't pay off. You are able to assert where things should be placed based on application awareness and on application-agnostic information, like when something was last accessed. You can dramatically reduce your cost without sacrificing the service level. So it's very much a user-perspective value proposition.
We really don’t see anybody who’s come out attacking that. The
fact is, we see pretty much all the storage vendors are lining up
embracing that, big or small. We’re talking about IBM, I’m talking
about HP or Hitachi. If you go to the storage conference that’s what
they all talk about. In the storage world, the customer also has a huge
demand for interoperability. So actually that’s an area where
Documentum has done a pretty good job. And EMC also encourages that we
continue that way. The EMC software group, Dave DeWalt, is the head of
that organization. He is aggressively using our past experience best
practice to recruit partners, open up the infrastructure. So instead of
being highly competitive at every turn we’re really looking for a
win-win, or “coop-petition” with pretty much everyone. Cooperation if
possible, but at the end of day we believe we’ll be successful if we
listen to the customer and deliver what they wanted rather than spend
all our energy preserving what little we can hold onto.
Blue Fish: Are there things in 5.3 that you're especially proud of, where the customer said "we really want this" and you were able to deliver it in 5.3?
Yes, there are several things. I think the biggest one, which is my personal pride and joy, is the BPM engine. We've done a good job on that one. I always had aspirations to do a good job there; that's why we always had that router, or workflow, in our system. But this time we really kicked it up a notch, took the best practices of all the BPM engines in the market, and sank enough money into it. We are being used for some of the largest mortgage-processing businesses. Those are the high-end BPM applications. We benchmarked to show we can do millions of transactions per hour through the workflow process. So that's something that's important.
The other one is less obvious. We have spent lots of time polishing the user interface, and we are already getting great feedback from our customer base. Customers are saying migration from 5.2 to 5.3 actually requires significantly fewer customizations. A lot of the things you used to have to do are gone, and lots of our user interactions require far fewer clicks. Ironically, that's different from server architecture enhancements: on the server side, people can see the benefits in expanded functionality; on the UI side, it's attention to detail. It's about polishing. It takes a huge amount of effort, and it's not obvious where it went, but if you're a user, you're not hitting those speed bumps anymore, and that's very significant. These two areas are very visible to users.
Architecturally, the most important thing in this release is unification. Unification is a dangerous word to use; it implies we weren't unified before. Actually, unified architecture is a sort of ongoing religion and process for us. When we acquire a company, we never buy one just to connect it side by side. We don't buy market share. We always pick up domain knowledge. It is expensive, because we basically not only have to learn what the functionality is, we have to re-implement it so it becomes part of our core competency, and we've been doing that forever. Obviously, when we first bought Bulldog, which is a digital asset management system, and Relevance, we did the unification. eRoom, by the time we bought it, was a pretty big company already; they did tens of millions of dollars annually, and frankly they had a lot of know-how we did not have. So that one turned out to be a multi-year unification exercise for both technology and culture. TrueArc, the records management system, was being pulled in at the same time. That's painful. But it's a very exciting product now that we have finished the job.
The whole idea is that you should be able to have your user interface like we talked about before. You should be able to do anything, using any functionality inside Documentum, unconstrained by application packaging or engineering architecture. That is a big difference from un-unified products, where you basically have to jump from one UI to another and one function doesn't interoperate with another. We eliminated those problems, so you should be able to do anything you want, unconstrained by technology. So again, it's focus on the customer and what they wish to see.
Blue Fish: The other big one that affects users a lot is the whole idea of information access. You made a big switch from Verity to FAST. Can you talk a little bit about why the switch?
It wasn’t the main point of switching from
Verity to FAST. It’s a very important one. As I had mentioned in the
beginning, architecture should not be an after thought. But
occasionally, you learn something you need to redo. While I rebuilt the
whole server during the late ’90s, search infrastructure was not
rebuilt. We did not know enough about that. So the Internet really
showed us the power of search. Actually it’s all about search and
through search you emulate organization structure and also a
collaboration capability as evidenced by Google.
Through that process, we were always torn between making search fast and keeping it secure; nobody wants to expose everything inadvertently one day. We finally figured out how to deliver an Internet search experience, with sub-second response, yet with ACLs applied. You can imagine that's a non-trivial exercise. We ended up having to rethink how we built the system, not to mention that I have to support XML search with all the components of the structure, or the folder, in the picture, so it's a pretty big problem. It's clearly a problem we did not understand fifteen years ago. We finally got it. So besides re-architecting that particular portion, we also yanked a whole portion of server code out. When we did the Verity integration, it was reasonable to say we had a state-of-the-art search engine underneath us. The world has changed. With our first repository implementation, the maximum number of objects you could have in your database was four billion. Now that's not big enough. We rewrote it to support 256 peta objects.
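One common way to combine full-text speed with security, shown here only as a toy illustration of the general problem, not as Documentum's or FAST's actual design, is to store each document's allowed readers alongside its terms, so the permission check happens during retrieval rather than as a slow post-filter over every hit.

```python
from collections import defaultdict

class SecureIndex:
    """Toy inverted index that keeps each document's readers next to its
    terms, so a search never returns a hit the caller cannot see."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> doc ids containing it
        self.readers = {}                 # doc id -> principals allowed to read

    def add(self, doc_id, text, readers):
        for term in text.lower().split():
            self.postings[term].add(doc_id)
        self.readers[doc_id] = set(readers)

    def search(self, term, principal):
        # The ACL check is applied while walking the posting list, so the
        # caller only ever sees documents it is entitled to read.
        return {d for d in self.postings[term.lower()]
                if principal in self.readers[d]}
```

The hard part in a real system, and the reason he calls it non-trivial, is keeping this fast when ACLs are large, hierarchical, and changing while the index is live.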
Verity, like Documentum (Verity is actually older than Documentum), has an architecture that reflects the older thinking, and to my disappointment they did not rewrite their server. I don't think we're ready to say they definitely could not scale, but we were concerned about how well they could. Given that we had a chance to look at alternatives, we chose to go with a well-known Internet-scale search engine, which is FAST.
So we did two things in 5.3: we re-architected our search support, and we picked a different vendor to go with. The other thing we did was recognize that we may be right now, but Internet search technology evolves at an unheard-of speed. We now have an open system interface. What that means is that if a user feels Verity is appropriate, we can still work with Verity to build a connector, so the Verity engine could be the one embedded, or it could be FAST, or anybody else who comes along with a better mousetrap that people like. We made it open. It's flexible now.
Blue Fish: Speaking of the future… looking at some of the technological opportunities over the last ten years, you've had the Internet, you've had mobile connectivity, and nanotechnology is emerging. What are some emerging technologies that you think will impact the ILM space?
There are quite a few things, both short term and longer term. The short-term ones I'm particularly psyched about are RSS, blogs, and wikis. I think they've changed the interaction model. Imagine: anything you put in a folder, I can syndicate through RSS. Who needs web content management? Well, that's an exaggeration, but you can imagine that managing an intranet website could become dramatically easier. Like any new technology, I have most likely underestimated what it could do. But I see in RSS, blogs, and wikis that they have democratized the sharing of information. They've gone beyond what the web could do. If you translate that into our world, it probably means contribution into a content repository gets easier and simpler. A next level of frictionless interface is possible.
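The "syndicate a folder through RSS" idea can be sketched in a few lines; this is a hypothetical illustration (the `base_url` and feed fields are made up for the example), not any product's feature.

```python
from email.utils import formatdate
from pathlib import Path
from xml.etree import ElementTree as ET

def folder_to_rss(folder: Path, base_url: str) -> str:
    """Render a folder's files as a minimal RSS 2.0 feed: one <item> per
    file, with the file's modification time as the publication date."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = folder.name
    ET.SubElement(channel, "link").text = base_url
    ET.SubElement(channel, "description").text = f"Contents of {folder.name}"
    for f in sorted(folder.iterdir()):
        if f.is_file():
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = f.name
            ET.SubElement(item, "link").text = f"{base_url}/{f.name}"
            ET.SubElement(item, "pubDate").text = formatdate(f.stat().st_mtime)
    return ET.tostring(rss, encoding="unicode")
```

Point any feed reader at the output and dropping a file in the folder becomes publication, which is the contribution-made-frictionless point above.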
Longer term, I’m particularly excited by grid computing and higher
communication bandwidth. Internet2 is going to hit the market
commercially. It’s already on campuses and it’s a hundred times the
current internet speeds. It completely redefines what’s near and what’s
far, what caching means, what disaster recovery means. So when I talk
about the content network initially we were anticipating for something
like this. That would be really powerful. That also means where the
software engines sit could be very different. I think ASP in another
life could be successful. Google’s, for an example, Gmail. I hear
people actually build a file system and UI on top of that. Somehow
nobody’s getting nervous that it’s not exactly at next door. So while I
don’t think that particular implementation will win, it is a behavior
modification. People just getting comfortable with doing that type of
thing. I think those type of things are pretty exciting-will change the
Overall I think virtualization, the location transparency-all the
things will finally come to fruition.
Blue Fish: Mike Trafton (Senior Architect and Founder, Blue Fish) said that he remembers talking to you ten years ago, and you mentioned to him that it was a personal hope of yours that Documentum would be one of the supporting technologies that would help find a cure for cancer. Tell me a little bit about that.
Yes. That’s part of reason why we have aggressively invested in
collaborations. One of our taglines, we only had one tagline for a
while, was “Uniting the world through content.” We may not be able to
make anybody smarter, but hopefully we can at least let any individual
know everything there is to be known about a particular topic, so out of
that, an insight will come.
Therein lies the rationale for ECIS, which can search all repositories, data systems, and content management systems. That's also why we acquired Relevance: our content intelligence service, concept categorization, and things like that. It is also why we are aggressively driving the content value chain. That means you go to the biotech companies, the pharmaceuticals, the contract manufacturers, contract research organizations, hospitals, and the FDA. We're trying to link all of that together, getting everybody onto a similar infrastructure to share information. It does not have to have Documentum as a standard to be viable. We ultimately will change people's lives. It may actually save our lives individually, because mistakes should be made only once and the lesson learned by everyone.
I think there is another thing where there's not enough progress, and I hope to drive it: knowledge management. Actually, courtesy of those terrorist activities, there is renewed interest in analytical content and in discovering relationships where they are implicit. That's the basis for knowledge mining, and we see more research in that arena, and we'd like to help out as well. The world will just be a better place because of it.
I know you’re very passionate about
this stuff and there are a lot of founders who after they went public
would have quit or done something else, but you are still very, very
involved. What drives you? What keeps you going?
I think it has a lot to do with why somebody would start a company. Some do it for the glory, some for the money, and some for the process, the journey. I did this because I thought the world could be a better place with those content value chains and the sharing of learning. Think about what the world was like ten or fifteen years ago versus today, and look at the types of things we can do now. We talked about the pharmaceutical business: new drugs now get to market six months to a year quicker. Power plants that used to take four years to build can be done in two and a half. A lot of things get done quicker, faster, and more accurately, and we are changing people's lives. At the end of the day, I'd like to leave the world a better place than I found it. I also feel that if everybody felt that way, life would be easier.
I’m sure many people have the same wish, but I’m in good fortune
that I actually could see the change made. That’s a huge reinforcement
for me and for our team. You look at the world which you couldn’t share
information easily until today. It was a different world. It’s a
better world now and I know that in a few more years it will get better.
Hopefully when my girls grow up, they wouldn’t know what the ancient
world was like. What a kick.
Blue Fish (Jes): If you weren’t doing this what would you be doing?
It never occurred to me.
Blue Fish (Jes): This was always what you wanted to do?
I always wanted to change the world for the better. Actually, in a way, I'm merely trying to restore a "timeline." I'm a science fiction fan. In 1978, I graduated from MIT. At that time there was Multics, there was the VAX. Software was the way it's supposed to be; all the virtualization, the VM things, were there. Then there was DOS, which threw the world completely off on a tangent. A lot of computer science knowledge was rendered irrelevant overnight by PCs. Not anymore; it's coming back now. Even Microsoft is going to have a versionable file system soon enough. EMC will have one because of us. We're getting back on the right track to really benefit from computer science. We had a very high level of innovation, then we sort of went to the lowest common denominator. We're rebuilding it back. It takes a while. But by putting the innovation back, it's a …
Blue Fish (Jes): Have there been any bits and pieces of architecture you put into Documentum where you've been surprised that people didn't share what you wanted to do with it, where they've done something and you said, "that really wasn't what I planned"? Or where you had the vision and you put the pieces in, and then …
That’s a good point. So when it comes to
architecture I practice active management. Here’s an old phrase I
learned from our old CEO Jeff Miller. He said, inspect, don’t expect.
Architecture generally is another thing. Even architect of building
will go to work site to inspect. Occasionally we do have incorrect
implementation but by next release we get them out, because I’m firm
believer there’s no right way of doing the wrong thing. So, if somebody
took us down the wrong path, it’s easier just to shoot that problem and
restart again. That’s why a good architecture is expensive to keep it
up, but we have benefited from that. We never evolved them as quickly
as I wished. So while I’ve been working on this for fifteen years, I
thought the world would be this way ten years ago. I’m learning that
the world isn’t moving at a pace I would like it to.
Blue Fish: Is there anything else we didn't ask that you wanted to share?
Since a lot of developers read your articles, I would like to share the sort of corporate culture I want to build. Years ago I worked for a company that was famous for its technology, but not much for solutions. When I started this company, I focused on the solution first; then we built a platform to deliver the solutions. That view has not changed and will not change.
What we’re looking for are more developers who actually understand
what we want to do and to help us get there. So to change the world
together. What that translates into-the call to action is actually
“talk to us,” say “this area sucks, fix this, and fix that.” Tell us
what we do wrong, and whether we [fix] it together. Just like Mike
(Trafton) helped us get the import utility done. Because our customers
said, how ridiculous is it to have a content management system with no
easy way to import information in there.
We’re trying to encourage … this to become a community with
joint planning. We would also like to have discussions such
as “hey, we don’t think you should be in this field. Get out of here,”
or whatever. I want to be a kinder, gentler company, so it’s not a
“my way or no way” exercise. It is to encourage people to share our
vision and then go there together. That’s why we invest in the developer
community. This content management problem is way too big for any
individual, any company to solve. We could make a difference on our
own. But we could make a bigger difference quicker together.
I’m getting low on patience. I want to build it all before I …
Is it as much fun now as it used to be
or do you have to be so much more careful with what you say and who you
talk to? Has anything changed since you started?
Yes and no. Let me talk about the
difference between a startup and here. Ironically, it’s actually E=mc².
When you start up, you have high velocity and acceleration
but less mass. Now we have lots of mass and less velocity. You
multiply them together; that equals impact.
Given the size of EMC, we’re careful about what we say. More
importantly, I feel, and I’m sure a lot of our executives feel
the same way, that there are implied obligations with our size. When we
were small, I could do anything I wanted; I was less likely to cause
collateral damage, especially in our partner organizations. I view partners
like I see our employees. I know you are not my employees, but I feel
you’re part of my team. You bought my story, so we’re working together
and we should go to the bank together. It’s not an “I’m stronger, you’re
weaker, therefore I win” exercise. Now, we are the bigger EMC. I could
inadvertently do things that cause unintended harm, and sometimes
even corrective action takes time. I want to get more
feedback and hear from people who are willing to guide us. We have a
willingness to listen, and we want to make sure we are addressing
customers’ needs.
Our customers are asking us for solutions that can be deployed
readily and cost hardly anything. Our market is shifting from visionary
early adopters to mainstream customers who have different
expectations. Responding to them, we have to move into certain areas.
We are trying to foretell our direction in the longer term so everybody
can know where and how to collaborate with us to magnify the power and
benefit. That ecosystem planning and collaboration is becoming a bigger
issue in my mind than ever.
So is it the difference between
steering a small boat and steering an oil tanker?
Yes. Very much so. Yeah. It’s no longer
sort of a joy ride, letting the wind take me. Right. Now I have to plot
the path first, because otherwise I could run aground on the islands.
It’s more premeditated. More thought goes into a decision. But the good
news is when I get there, I do get there, and with the whole industry
together.
Another thing that is relevant for developers is the maturity of our
industry. Part of the reason for our willingness to sell to EMC, to
join forces with EMC, is that we think the software industry has matured.
Some of my heroes are economists, because they see the
macro picture and they study history closely. In 1900, there were about
1,000 automotive vendors, because every bicycle manufacturer was then
building cars. Thirty years later, there were hundreds of automotive
companies. Fifty years later, there were three. The distribution
channel formed by dealerships, not just manufacturing or design
capability, became the key issue.
I think software is gradually turning into a similar situation.
Channels matter. Customers want only one neck to choke. They’re
looking for solutions that are interoperable.
If you think about the Documentum partnership, it is not only about
building on the platform. What we can also bring to our developer
community is the distribution channels. That’s why Rob Tarkoff is
strategically driving that. Imagine that in the future there is a
catalog. From it, you, as a customer, can buy all the accessories. You
know this set of products works together. It’s great for customers and
great for our developers.
Because you don’t have to worry about the channel setup, life
gets easier. Another way of looking at us is that beyond being a software
supplier, we can expand your market and provide additional channel
access for our partners.