The real economic value of open source software
It's curious that some advocates of proprietary software, and diehard detractors of open source / free software, rarely if ever pay for the software they use. Mind you, they run pirated proprietary software: Windows, Office, Visual Studio, Windows Server, SQL Server, and that's just for a start.
Paying for the software you use is a very good exercise; actually paying for it gives you a much more real sense of its value than merely "knowing" that it should be paid for.
For example, in my own experience, I have paid for open source software some 35 to 40 times:
When broadband Internet access was still not common, there was no way to get a Linux distribution, which could take up as many as 6 CDs, other than buying it from sites that shipped it to you. The alternative was to wait for the cover CDs of computer magazines, which took one to two months to publish an issue with the "new" version of a distro.
So there was no option but to pay for the CDs of several distributions; note that you paid on delivery, got the CDs, installed the software, and if it worked well you said "that was money well spent."
If the version of the distro I had bought did not work very well, I would say "hmm, I should not have bought this version, I spent money for nothing." That notion is very important to me professionally today:
It made me realize that, beyond the fact that certain software costs money, there is always the possibility that the software is not worth what it costs, whether the price is high or low.
Which brings us to today, where I regularly use software like Proxmox VE and VMware vSphere: the first is open source and free, while the other has a license worth tens of thousands of dollars (which was paid, by the way) just to access its basic features (features that Proxmox VE easily matches and surpasses, incidentally). Yet I know and understand that if we had to pay for Proxmox VE, the cost would probably not be far from what was paid for vSphere, because the quality of the software and its features are very similar.
This, as I understand it, is knowing the real value of software, something that is very easy to forget when you crack commercial software and download "free" software without thinking much about the licenses of either.
If you are a staunch supporter of proprietary software, and especially a diehard detractor of open source / free software, you should try the experience of paying for the proprietary software you use every day and see for yourself whether it is really worth the many dollars it costs to use:
Pay for a license of Office, Visual Studio, or SQL Server, and live the experience of saying "money well spent" or, depending on the version, "hmm, I should not have bought it, it was not worth the money."
Sunday, December 8, 2013
Attacks, Weaknesses and Threats
Today's subject promises some controversy, since it deals with who is who: what is a hacker, a cracker, a whitehat...? Opinions will surely differ, and the debate is on. In the last part of the chapter I talk about various hacking tools and provide you with documentation about them.
I hope you enjoy it, and thank you all for supporting us along the way; together we can all be great.
Attacks, Weaknesses and Threats
In the first chapter we saw that there are physical and logical threats that can affect hardware or data. Before presenting the types of attackers in detail, we revisit the subject of threats to look at them more closely.
- Interruption: corruption of or damage to a part of the system that prevents its operation.
  Detection is immediate.
  Examples: destruction of hardware; deletion of programs or data; operating system faults.
- Interception: access to information by unauthorized persons, using privileges that were not granted.
  Detection is difficult; it sometimes leaves no trace, since it does not alter the information, although it does obtain it.
  Examples: a subject connects to a public Wi-Fi access point that is actually a fake AP, and all the data transmitted over that connection is read and listened to; illicit copies of websites (phishing); interception of network communications.
- Modification: having gained access to the system, the attacker modifies its content for their own benefit.
  Example: a deface (gaining root privileges on a website to display the content the attacker wishes). It can also be a hardware modification.
- Generation: creation of new objects within the system.
  Detection is difficult.
  Examples: adding a user to the database; adding unauthorized transactions.
These are the assets that must be protected from those threats, along with their risk classification:
Assets
• Assets are the resources of the information system, or related to it, that the organization needs in order to operate correctly and achieve the goals set by its management.
• The key asset is the information handled by the system, that is, the data. Around this data other significant assets can be identified:
• The services that can be provided thanks to that data, and the services needed to manage that data.
Well, now we know what threats exist and what has to be protected.
Let's look at the types of attackers:
Types of attackers
In this part we will dive into the wonderfully abundant world of the hacking underground.
Note that there is no "official" definition for each of the actors in the world of computer security, so you may not agree with the definitions offered here, but they have been chosen as the ones most in line with the current scene.
I will expand on these points in particular, since it seems vitally important to dispel the popular idea of the hacker as a computer terrorist, because that is very far from reality.
Hacker / Whitehat: for these terms we need a more extended definition.
- Blackhat
- Greyhat
Pentester: internal or external staff dedicated to verifying the security of a company or system in order to address its vulnerabilities. They draft security policies, adapting to both company and institutional rules. They defend large companies and corporations; most of them call themselves white hats or ethical hackers, but below we will see what is, in my opinion, the purest sense of the word.
Wannabe: a hacker in the making; they share the philosophy but are still learning. They take their learning seriously, so they are not looked down on by the community.
Hacktivist: a person or group who uses their computer skills for social protest. They do not have to be hackers; they often use tools designed for them by cryptoanarchists.
Cryptoanarchist: crypto-anarchism is an ideology, or strategy, in favor of using asymmetric cryptography to enforce privacy and individual freedom. The term was popularized by Timothy C. May, and Vernor Vinge described it as the cyberspace realization of anarcho-capitalism. Cryptoanarchists pursue the goal of creating cryptographic software that can be used to circumvent prosecution and harassment when sending and receiving information over computer networks.
Timothy C. May writes on crypto-anarchism in the Cyphernomicon: "What emerges from all this is not clear, but I think it will be a form of anarcho-capitalist market system I call crypto-anarchy."
Cracker: the term cracker has several meanings in the field of computing:
- A person who violates the security of a computer system for personal gain or out of mischief.
- A person who designs cracks, computer programs that serve to modify the behavior or extend the functionality of the original hardware or software they are applied to.
This article discusses how solutions sold as "complete" are, note, not complete, and how you have to complete them so that they really fulfill the purpose for which they were designed. There are also comments on the areas of responsibility of third-party IT providers and suppliers versus the internal IT area of an organization and its main client (the organization itself).
Good solutions, but partial
In infrastructure it is common to see that, when an IT solution is purchased, the vendor agrees to perform certain work, comes to the company / organization, does the work, and then leaves, leaving outstanding only guarantees, for a time, under certain conditions, etc.
For example, when installing a vSphere infrastructure: the hypervisors are installed, the vCenter server is set up, the hypervisors are added to vCenter, perhaps a few virtual machines are deployed (probably not), and that's it (up to there goes the work agreed with the IT service supplier in this example). The customer then takes the baton from there, managing the whole infrastructure, now virtual: installing, migrating operating systems from physical to virtual, etc., etc.
The fixed price of the infrastructure work and its limits are essential, because the supplier will not take care, indefinitely, of every question related to what it initially installed / configured.
Complete IT Solutions
Now, the case of the internal systems areas of an organization is rather different. Each internal IT area is required to sustain the continuity of the infrastructure over time, long term. This is very different from the obligation of a commercial IT supplier; however, it is common for internal IT solutions to be implemented in an organization "one time" and then left "as is", without taking maintenance and continuous improvement into account (which, note, is a job requirement for the employees of the internal IT area, by the way).
Following the vSphere infrastructure example, some steps after the "simple" installation and configuration of the vSphere virtual infrastructure could be (more or less in order of strategic and technical importance):
1) Implement automated backup of the vCenter configuration (and its backend DB),
2) Implement automated backup of the ESXi configuration (see the sketch after this list),
3) Deploy (and note, buy) a virtual backup solution (Veeam, etc.) for the virtual machines themselves,
4) Implement automated configuration capture (extract all settings from vSphere, dump them into Git or the like, then repeat regularly, to have an accurate central record of each configuration change), a.k.a. "configuration management",
5) Implement monitoring of the virtual infrastructure (there are several ways),
6) Deploy a vSphere Update Manager (to keep all hypervisors updated / patched),
7) Implement high availability for vCenter (i.e. set up another vCenter server, in any of the several possible ways),
8) Implement the required maintenance automation for vCenter (tip: the DB backend needs attention from time to time),
9) Define how to proceed, on the technical side, to recover from the fall / crash / outage of any component of the vSphere virtual infrastructure (including having the tools and plans installed and configured before any recovery is ever needed, and having done drills and field tests to know that all the policies / procedures / tools actually work as they should).
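As an illustration of item 2, here is a minimal sketch of what automating the ESXi configuration backup could look like. It assumes SSH access to the hosts, that the vim-cmd hostsvc/firmware/backup_config command is available and prints a download URL for the bundle, and it uses hypothetical host names and credentials; treat it as a starting point, not a finished tool.

    import urllib.request
    import paramiko  # third-party SSH library (pip install paramiko)

    ESXI_HOSTS = ["esxi01.example.local", "esxi02.example.local"]  # hypothetical names
    USER, PASSWORD = "root", "secret"  # use proper credential handling in practice

    def backup_esxi_config(host):
        """Ask the host to generate its config bundle and download it locally."""
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username=USER, password=PASSWORD)
        # Assumed to print something like: "Bundle can be downloaded at : http://*/downloads/...tgz"
        _, stdout, _ = ssh.exec_command("vim-cmd hostsvc/firmware/backup_config")
        output = stdout.read().decode()
        ssh.close()
        url = "http://" + output.split("http://", 1)[1].strip()
        url = url.replace("*", host)  # '*' stands in for the host name in that output
        local_file = host + "-configBundle.tgz"
        urllib.request.urlretrieve(url, local_file)
        return local_file

    for h in ESXI_HOSTS:
        print("saved", backup_esxi_config(h))

Run from cron or any scheduler, and keep the resulting bundles together with the rest of the backups.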
If you notice, extrapolating the general idea of the example, basically any infrastructure needs (in addition to installation, configuration and the initial start of production):
- Backup,
- Configuration management (a sketch follows this list),
- Monitoring and optimization / maintenance / continuous improvement,
- Added redundancy / additional resilience (as part of the continuous improvement),
- An action plan for disaster recovery.
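To make the configuration management bullet a bit more concrete, here is a minimal sketch of the dump-into-Git idea, assuming git is installed and using a placeholder string instead of a real export from the vSphere APIs; the repo path and file names are hypothetical.

    import subprocess, datetime, pathlib

    REPO = pathlib.Path("vsphere-config-repo")  # local Git repo used as the central record

    def record_config(name, text):
        """Write one configuration dump into the repo and commit it if it changed."""
        REPO.mkdir(exist_ok=True)
        if not (REPO / ".git").exists():
            subprocess.run(["git", "init"], cwd=REPO, check=True)
        (REPO / (name + ".txt")).write_text(text)
        subprocess.run(["git", "add", "-A"], cwd=REPO, check=True)
        # Commit only if something actually changed since the last run.
        if subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=REPO).returncode != 0:
            msg = name + " config as of " + str(datetime.date.today())
            subprocess.run(["git", "commit", "-m", msg], cwd=REPO, check=True)

    # The dump itself would come from the vSphere APIs or CLI tools; this is a placeholder.
    record_config("vcenter-clusters", "cluster01: HA=on, DRS=fully-automated\n")

Repeated regularly, this gives exactly the "accurate central record of each configuration change" mentioned above.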
Without all these details (and several others not mentioned), the solution can "crash" very easily and stop working properly, and with some bad luck also unexpectedly (e.g. New Year's morning, 3 a.m., a call from the company's owner to the IT staff about an outage at 3:10, when the people using the system notice that it simply does not work; "use cases": an on-call clinic, an on-duty pharmacy, a security company, the police, etc.).
* This is a matter of opinion, but to make the TCO of the solution more complete, you could add a forecast / estimate of the future costs of lifecycle management, for example by providing for a platform migration. Following the example, that would mean foreseeing a possible / eventual migration path from VMware vSphere 5.1 (+ ESXi) to Microsoft Hyper-V 2012 + System Center 2012 Virtual Machine Manager.
For example, having to buy a SAN "now":
- increases the TCO of the vSphere solution, but
- lowers the TCO of the possible future Hyper-V 2012 solution, and
- note, it lowers the TCO of the "virtual infrastructure" solution as a whole (which is what actually matters to the organization), and therefore produces an acceptable "migration path", leading to the conclusion that buying the SAN "would be good" :-)
Areas and limited times
Internal IT areas have a scope of involvement in, and obligations toward, the IT infrastructure that is by far much greater than almost any "turnkey" solution a third party can provide, since even with the best available budget the scope of involvement of an outsourced IT provider is always, but always, limited to certain tasks and obligations and to a contracted period of time during which it will respond to the client, after which it will no longer have any obligation to respond to the client.
The internal IT area, on the other hand, is not limited at all in its obligations to the organization, to which it must respond as an organizational commitment (i.e. regardless of who happens to make up the area as employees / managers), continuously, and it is responsible for completing and correcting any limitations that exist in the infrastructure.
Following the example: in the "turnkey" solution no backup mechanism for the ESXi hypervisors was provided. If the provider does not do it, it is the duty of the internal IT area to complete the solution.
The IT provider's contractual obligation always has a practical limit: the maximum time contracted and how much work can be done during that time.
Even though what is usually contracted is:
- "solutions",
- "turnkey solutions",
- "complete solutions",
and other fine IT vendor jargon, no matter what is "promised", the solutions provided by a third party will never be fully complete; only what is in the task list contained in the contract will be delivered, and any additional work, paid or not, is at the discretion and goodwill of the third-party provider.
Directly... unless they are permanently contracted to do the work of the internal IT area... oops, but that contract also has a maximum, so no, you cannot sustain unlimited outsourcing; there will always be more to pay, or additional services to outsource, to make it "unlimited" (which is why it is very good business indeed).
Improving productivity and efficiency through a multistage implementation
Financial services firms can take an existing, inefficient infrastructure for risk management and compliance and gradually grow it into an integrated, highly efficient grid system.
An existing infrastructure may comprise stovepipes of legacy applications: disparate islands of applications, tools, and compute and storage resources with little to no communication among them. A firm can start by enabling one application, a simulation application for credit risk modeling, for example, to run faster by using grid middleware to virtualize the compute and storage resources supporting that application.
The firm can extend the same solution to another application, for
example,
a simulation application used to model market risk. Compute and storage
resources for both simulation applications are virtualized by
extending the layer of grid middleware; thus both applications can
share processing power, networked storage and centralized scheduling.
Resiliency is achieved at the application level through failover built
into the DataSynapse GridServer. If failure occurs or the need to prioritize
particular analyses arises, one application can pull unutilized resources that are
supporting the other application. This process also facilitates
communication and collaboration across functional areas and applications to provide
a better view of enterprise risk exposure.
Alternatively, a firm can modernize by grid-enabling a particular
decision engine. A decision engine, such as one developed with Fair Isaac’s
tools, can deliver the agility of business rules and the power of predictive
analytic models while leveraging the power of the grid to execute decisions
in record time. This approach guarantees that only the compute-intensive
components are grid-enabled while simultaneously migrating
these components to technology specifically designed for decision
components.
Over time, all applications can become completely grid-enabled or
can share a common set of grid-enabled decision engines. All compute
and data resources become one large resource pool for all the
applications, increasing the average utilization rate of compute resources
from 2 to 50 percent in a heterogeneous architecture to over 90 percent
in a grid architecture.
Based on priorities and rules, DataSynapse GridServer automatically
matches application requests with available resources in the distributed
infrastructure. This real-time brokering of requests with available
resources enables applications to be immediately serviced, driving
greater throughput. Application workloads can be serviced in task units of
milliseconds, thus allowing applications with run times in seconds to execute
in a mere fraction of a second. This run-time reduction is crucial as
banks move from online to real-time processing, which is required for
functions such as credit decisions made at the point of trade execution. Additionally, the run time of
applications that require hours to process, such as end-of-day profit and loss
reports on a credit portfolio, can be reduced to minutes by leveraging
this throughput and resource allocation strategy.
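GridServer's actual scheduler is proprietary, but the pattern described above, prioritized requests being matched to whatever workers in a shared pool are free, can be sketched with Python's standard library; the task names and the toy workload below are hypothetical.

    import heapq
    from concurrent.futures import ThreadPoolExecutor

    def simulate(request):
        """Stand-in for a compute-intensive risk calculation."""
        return sum(i * i for i in range(request["size"]))

    # Pending requests with priorities (lower number = more urgent).
    requests = [
        {"name": "credit-var", "priority": 0, "size": 200_000},
        {"name": "market-var", "priority": 1, "size": 300_000},
        {"name": "eod-pnl", "priority": 2, "size": 500_000},
    ]

    # The "broker": order pending work by priority, then hand it to idle workers.
    queue = [(r["priority"], r["name"], r) for r in requests]
    heapq.heapify(queue)

    with ThreadPoolExecutor(max_workers=4) as pool:  # the shared resource pool
        futures = {}
        while queue:
            _, name, req = heapq.heappop(queue)
            futures[name] = pool.submit(simulate, req)
        for name, fut in futures.items():
            print(name, "->", fut.result())

The same shape scales out when the "pool" is a farm of grid engines rather than local threads.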
The workhorses of the IBM grid infrastructure are the grid engines:
desktop PCs, workstations or servers that run the UNIX, Microsoft
Windows or Linux operating systems.
These compute resources execute various jobs submitted
to the grid, and have access to a shared set of storage devices.
The IBM Grid Offering for Risk Management and Compliance
relies on grid middleware from DataSynapse to create distributed
sets of virtualized resources.
The production-proven, award-winning
DataSynapse GridServer application infrastructure platform
extends applications in real time to operate in a distributed computing
environment across a virtual pool of underutilized compute resources.
GridServer application interface modules allow risk management and
compliance applications and next-generation development of risk management
and compliance application platforms to be grid-enabled.
IBM DB2 Information Integrator
enables companies to have integrated,
real-time access to structured and unstructured information across
and beyond the enterprise. Critical to the grid infrastructure, the software
accelerates risk and compliance analytics applications that process massive
amounts of data for making better informed decisions.
DB2 Information Integrator provides
transparent access to any data source, regardless of its location,
type or platform.
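As a rough illustration of what that "transparent access" means in practice: a federated setup exposes remote sources as local nicknames that ordinary SQL can join. The sketch below assumes hypothetical nicknames TRADES_MAINFRAME and POSITIONS_ORACLE have already been defined on the federated server, and uses the ibm_db Python driver simply as a convenient DB2 client; all connection details are placeholders.

    import ibm_db  # IBM's Python driver for DB2

    conn = ibm_db.connect(
        "DATABASE=RISKDB;HOSTNAME=db2.example.local;PORT=50000;"
        "PROTOCOL=TCPIP;UID=riskuser;PWD=secret;", "", "")

    # One query joins data that physically lives in two different systems;
    # the federation layer hides their location, type and platform.
    sql = (
        "SELECT t.trade_id, t.notional, p.exposure "
        "FROM TRADES_MAINFRAME t "
        "JOIN POSITIONS_ORACLE p ON p.trade_id = t.trade_id "
        "WHERE p.exposure > 1000000"
    )
    stmt = ibm_db.exec_immediate(conn, sql)
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row)
        row = ibm_db.fetch_assoc(stmt)
    ibm_db.close(conn)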
Scientists build the first computer with carbon nanotubes
Carbon nanotubes, tiny tube-shaped molecular structures made of carbon, have been studied for a long time. They have many potential applications, especially in the world of technology, but no one was sure whether they could be used in a practical way to build advanced electronic systems. That doubt has just been cleared up: yes, they can, as demonstrated by a group of scientists from Stanford University who have built the first computer with carbon nanotube circuitry.
The machine in question has only 178 transistors, far below the millions found in any processor on the market. However, the carbon-nanotube-based circuit built by the scientists is effective; it allows the machine to run an operating system capable of performing a basic calculation task and a sorting task simultaneously and of switching between the two. According to the researchers, it has a processing capacity similar to the Intel 4004, released in 1971.
Overcoming obstacles
Achieving a carbon-based computer has been a daunting task. On one side is the work, the projects and the research that the scientific community has carried out on the material over recent years, without which it could not have been built. On the other are the many obstacles the scientists had to overcome to complete the project.
Of all of these, two stand out: the nanotubes tend to "self-assemble" in unpredictable ways, and in some cases these junctions cause some nanotubes to behave like metal wires that conduct electricity constantly instead of "switching on and off". So electronic chaos was guaranteed.
The solution came in the form of a new manufacturing technique they have dubbed "imperfection-immune". Simplifying the matter, they basically developed an algorithm capable of designing circuits that work even when the tubes are not aligned, and they managed to vaporize, literally, the tubes that do not behave as they should by raising their temperature with an electric pulse.
Best of all, they believe this technique could be applied in industrial processes in the not-too-distant future; that is, reliable carbon nanotube transistors could be mass-produced and, in turn, used to build chips.
Beyond computers
But what is so special about carbon nanotubes? This is the crux of the matter. Not only do they dissipate heat much more efficiently than silicon, they are also very small (thousands of them could fit in a human hair). The former would let us largely get rid of the heat dissipation problem in electronic devices; the latter would let us keep "miniaturizing" transistors and forget about silicon chips, which before long will not be able to be made any smaller.
In short, by demonstrating that it is feasible to build complex electronic systems using nanotechnologies beyond silicon, they have taken a big step forward that brings us closer to the goal of building devices smaller, faster and more efficient than any of the current ones.
Two of the most powerful machines in the region are in Mexico.
These days, being the world's most powerful supercomputer only guarantees you the number one spot for a few months. A year ago it was the IBM Sequoia with 17,173 TeraFLOPS, subsequently surpassed by the Cray XC30 "Cascade" with 17,590 TeraFLOPS, which was soon relegated to second place (and IBM to third) by the Chinese supercomputer Tianhe-2, the fastest in the world to date with 33,862 TeraFLOPS.
In Latin America, by contrast, the institutions that own supercomputers do not run at that pace, as can be seen through the LARTop50 initiative, which began in 2011 at the National University of San Luis in Argentina, seeks to maintain an up-to-date list with statistics on the 50 most powerful supercomputers in Latin America, and has published the ranking of the fastest supercomputers in the region.
1. Miztli
The fastest computer in Latin America is in Mexico, at the National Autonomous University of Mexico (UNAM), and consists of a Hewlett-Packard system with Intel E5-2670 processors whose 5,280 cores are capable of reaching a speed of about 80 TeraFLOPS.
2. CENAPAD-SP
Second place belongs to the computer at the National Center for High-Performance Processing in Brazil, an IBM machine with POWER7 processors and 1,280 cores, reaching around 27 TeraFLOPS.
3. Levque
The third fastest computer in Latin America is at the Center for Mathematical Modeling of the University of Chile, and is an IBM iDataPlex with Intel Xeon X5550 Nehalem processors and 528 cores that allow it to reach 5 TeraFLOPS.
4. Medusa
Like the first, the fourth is also found in Mexico, at the Center for Research in Optics, and consists of a computer manufactured by the Mexican company Lufac with Intel Xeon X5675 processors, 432 cores and a maximum speed of 5 TeraFLOPS.
5. Isaac
The fifth fastest computer is in Argentina, at the National Atomic Energy Commission, and consists of a machine made by SIASA with Intel Xeon E5420 and Xeon X32207 processors; thanks to its 644 cores it reaches 2.9 TeraFLOPS.
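To put those figures side by side, here is a quick back-of-the-envelope division of the listed speeds by the listed core counts; the GFLOPS-per-core values are my own rough calculation, not part of the ranking.

    # (name, TeraFLOPS, cores) as listed above
    machines = [
        ("Miztli", 80, 5280),
        ("CENAPAD-SP", 27, 1280),
        ("Levque", 5, 528),
        ("Medusa", 5, 432),
        ("Isaac", 2.9, 644),
    ]

    for name, tflops, cores in machines:
        print(f"{name:12s} {tflops * 1000 / cores:5.1f} GFLOPS per core")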
The design of a website is essential to how our site is perceived; when we leave the design in the hands of graphic arts professionals, the result is obvious. It assures us of a balanced design in color, shape and structure. We will create an environment where visitors feel comfortable and safe, which certainly helps us retain visitors and convert them into customers or subscribers to our articles.
Complexity
Sometimes, especially in certain types of business, we need a certain level of complexity to represent the company's products or activities: social networks, e-commerce sites with rating systems, intranets with a user for each employee and different privilege levels...
No doubt these conditions can significantly increase the price of a web page.
Size
Size matters, depending on the case: if we have a blog and it is we ourselves who write articles from time to time, there is no doubt that size does not affect the cost. But if we need an editor or writer to write the articles on our blog for lack of time, there will no doubt eventually be significant outlays of money.
It can also happen that we need to create a page with a wide variety of items; imagine for a moment that for each item we can, in addition, open its brand, see the brand's description, and also a tab for each component of the product being shown. After all that work we will have a very comprehensive website with lots of information, but also a large one, and possibly very expensive to create.
The detail of the page
When I speak of the detail of a website I mean all those details that sometimes make the difference between a "usable" web page and a not-so-"usable" one. For example: when submitting a form with badly entered data, does it automatically check the fields before sending, or do you have to go back and fill everything in again? Will we have a clock? A hit counter? Does the login page let us sign in with our social networks? Will we have music? Help texts that hide automatically?
All these details have to be budgeted for in a web page.
Adaptability to devices
Mobile Web Designs
In recent years the rise of mobile devices and tablets has given rise to new ways of making user-oriented websites: what in English is called responsive design, or cross-resolution as I like to call it. The same web page adapts its layout to the size of the screen of the device displaying it, maximizing space and usability for every visitor.
This is always extra work; it is almost like carrying several designs in one. But it is also something extremely positive, as we maximize the quality of visits from these increasingly popular devices.
Security
Web security
For simple, static web pages this may not be important, but when we have somewhat more complex pages, and especially if we work with sensitive user data, we may be required to make sure the website meets certain network security criteria. Whenever we have databases or server-side scripts there is an added risk, especially if we use a CMS (content management systems like WordPress or Joomla). Ask whether security measures are included in the budget for your website. You'd be surprised how easy it can be to hack a website without the appropriate security systems.
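One small, concrete example of the kind of measure that belongs in that budget (my own illustration, not something from the article): any script that builds database queries out of user input should use parameterized queries instead of string concatenation, which is the classic door to SQL injection.

    import sqlite3

    conn = sqlite3.connect("site.db")  # hypothetical site database
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT)")

    user_input = "alice'; DROP TABLE users; --"  # hostile input from a web form

    # Unsafe: string concatenation lets the input rewrite the query.
    # cur.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Safe: the driver passes the value as data, never as SQL text.
    cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(cur.fetchall())
    conn.close()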
Web positioning
To be in the top positions of search engines like Google, we must make sure the site follows the webmaster guidelines that the search engines publish. This always means extra work: besides creating the site, we have to make sure it meets a series of requirements, so as to produce a quality website that is viewed favorably for SEO.
SEO
But this, though essential, is not the only thing we have to worry about; we also need to build our online reputation so as to be well regarded on the net and move up in the search results. This is off-page (or Off-Page) work, and it is essential for good optimization.