{"global":{"lastError":{},"locale":"en","locales":{"data":[{"id":"de","name":"Deutsch"},{"id":"en","name":"English"}],"loading":false,"error":false},"currency":{"id":49,"name":"EUR"},"currencies":{"data":[{"id":49,"name":"EUR"},{"id":124,"name":"RUB"},{"id":153,"name":"UAH"},{"id":155,"name":"USD"}],"loading":false,"error":false},"translations":{"company":{"role-vendor":{"ru":"Производитель","_type":"localeString","en":"Vendor"},"role-supplier":{"ru":"Поставщик","_type":"localeString","en":"Supplier"},"products-popover":{"en":"Products","de":"die produkte","ru":"Продукты","_type":"localeString"},"introduction-popover":{"en":"introduction","ru":"внедрения","_type":"localeString"},"partners-popover":{"ru":"партнеры","_type":"localeString","en":"partners"},"update-profile-button":{"ru":"Обновить профиль","_type":"localeString","en":"Update profile"},"read-more-button":{"ru":"Показать ещё","_type":"localeString","en":"Show more"},"hide-button":{"_type":"localeString","en":"Hide","ru":"Скрыть"},"user-implementations":{"ru":"Внедрения","_type":"localeString","en":"Deployments"},"categories":{"en":"Categories","ru":"Компетенции","_type":"localeString"},"description":{"ru":"Описание","_type":"localeString","en":"Description"},"role-user":{"_type":"localeString","en":"User","ru":"Пользователь"},"partnership-vendors":{"_type":"localeString","en":"Partnership with vendors","ru":"Партнерство с производителями"},"partnership-suppliers":{"_type":"localeString","en":"Partnership with suppliers","ru":"Партнерство с поставщиками"},"reference-bonus":{"ru":"Бонус за референс","_type":"localeString","en":"Bonus 4 reference"},"partner-status":{"_type":"localeString","en":"Partner status","ru":"Статус партнёра"},"country":{"ru":"Страна","_type":"localeString","en":"Country"},"partner-types":{"ru":"Типы партнеров","_type":"localeString","en":"Partner types"},"branch-popover":{"ru":"область 
деятельности","_type":"localeString","en":"branch"},"employees-popover":{"_type":"localeString","en":"number of employees","ru":"количество сотрудников"},"partnership-programme":{"ru":"Партнерская программа","_type":"localeString","en":"Partnership program"},"partner-discounts":{"ru":"Партнерские скидки","_type":"localeString","en":"Partner discounts"},"registered-discounts":{"_type":"localeString","en":"Additional benefits for registering a deal","ru":"Дополнительные преимущества за регистрацию сделки"},"additional-advantages":{"ru":"Дополнительные преимущества","_type":"localeString","en":"Additional Benefits"},"additional-requirements":{"en":"Partner level requirements","ru":"Требования к уровню партнера","_type":"localeString"},"certifications":{"en":"Certification of technical specialists","ru":"Сертификация технических специалистов","_type":"localeString"},"sales-plan":{"ru":"Годовой план продаж","_type":"localeString","en":"Annual Sales Plan"},"partners-vendors":{"en":"Partners-vendors","ru":"Партнеры-производители","_type":"localeString"},"partners-suppliers":{"ru":"Партнеры-поставщики","_type":"localeString","en":"Partners-suppliers"},"all-countries":{"en":"All countries","ru":"Все страны","_type":"localeString"},"supplied-products":{"en":"Supplied products","ru":"Поставляемые продукты","_type":"localeString"},"vendored-products":{"_type":"localeString","en":"Produced products","ru":"Производимые продукты"},"vendor-implementations":{"_type":"localeString","en":"Produced deployments","ru":"Производимые внедрения"},"supplier-implementations":{"ru":"Поставляемые внедрения","_type":"localeString","en":"Supplied deployments"},"show-all":{"ru":"Показать все","_type":"localeString","en":"Show all"},"not-yet-converted":{"ru":"Данные модерируются и вскоре будут опубликованы. Попробуйте повторить переход через некоторое время.","_type":"localeString","en":"Data is moderated and will be published soon. 
Please, try again later."},"schedule-event":{"ru":"Расписание событий","_type":"localeString","en":"Events schedule"},"implementations":{"ru":"Внедрения","_type":"localeString","en":"Deployments"},"register":{"en":"Register","ru":"Регистрация ","_type":"localeString"},"login":{"_type":"localeString","en":"Login","ru":"Вход"},"auth-message":{"ru":"Для просмотра ивентов компании авторизируйтесь или зарегистрируйтесь на сайт.","_type":"localeString","en":"To view company events please log in or register on the site."},"company-presentation":{"_type":"localeString","en":"Company presentation","ru":"Презентация компании"}},"header":{"help":{"ru":"Помощь","_type":"localeString","en":"Help","de":"Hilfe"},"how":{"en":"How does it work","de":"Wie funktioniert es","ru":"Как это работает","_type":"localeString"},"login":{"en":"Log in","de":"Einloggen","ru":"Вход","_type":"localeString"},"logout":{"ru":"Выйти","_type":"localeString","en":"Sign out"},"faq":{"de":"FAQ","ru":"FAQ","_type":"localeString","en":"FAQ"},"references":{"de":"References","ru":"Мои запросы","_type":"localeString","en":"Requests"},"solutions":{"en":"Solutions","ru":"Возможности","_type":"localeString"},"find-it-product":{"ru":"Подбор и сравнение ИТ продукта","_type":"localeString","en":"Selection and comparison of IT product"},"autoconfigurator":{"_type":"localeString","en":"Price calculator","ru":"Калькулятор цены"},"comparison-matrix":{"_type":"localeString","en":"Comparison Matrix","ru":"Матрица сравнения"},"roi-calculators":{"ru":"ROI калькуляторы","_type":"localeString","en":"ROI calculators"},"b4r":{"_type":"localeString","en":"Bonus for reference","ru":"Бонус за референс"},"business-booster":{"_type":"localeString","en":"Business boosting","ru":"Развитие 
бизнеса"},"catalogs":{"ru":"Каталоги","_type":"localeString","en":"Catalogs"},"products":{"_type":"localeString","en":"Products","ru":"Продукты"},"implementations":{"ru":"Внедрения","_type":"localeString","en":"Deployments"},"companies":{"ru":"Компании","_type":"localeString","en":"Companies"},"categories":{"ru":"Категории","_type":"localeString","en":"Categories"},"for-suppliers":{"en":"For suppliers","ru":"Поставщикам","_type":"localeString"},"blog":{"en":"Blog","ru":"Блог","_type":"localeString"},"agreements":{"ru":"Сделки","_type":"localeString","en":"Deals"},"my-account":{"ru":"Мой кабинет","_type":"localeString","en":"My account"},"register":{"_type":"localeString","en":"Register","ru":"Зарегистрироваться"},"comparison-deletion":{"ru":"Удаление","_type":"localeString","en":"Deletion"},"comparison-confirm":{"en":"Are you sure you want to delete","ru":"Подтвердите удаление","_type":"localeString"},"search-placeholder":{"_type":"localeString","en":"Enter your search term","ru":"Введите поисковый запрос"},"my-profile":{"en":"My profile","ru":"Мои данные","_type":"localeString"},"about":{"en":"About Us","_type":"localeString"},"it_catalogs":{"_type":"localeString","en":"IT catalogs"},"roi4presenter":{"_type":"localeString","en":"Roi4Presenter"},"roi4webinar":{"en":"Pitch Avatar","_type":"localeString"},"sub_it_catalogs":{"en":"Find IT product","_type":"localeString"},"sub_b4reference":{"_type":"localeString","en":"Get reference from user"},"sub_roi4presenter":{"_type":"localeString","en":"Make online presentations"},"sub_roi4webinar":{"en":"Create an avatar for the event","_type":"localeString"},"catalogs_new":{"en":"Products","_type":"localeString"},"b4reference":{"_type":"localeString","en":"Bonus4Reference"},"it_our_it_catalogs":{"en":"Our IT Catalogs","_type":"localeString"},"it_products":{"en":"Find and compare IT products","_type":"localeString"},"it_implementations":{"_type":"localeString","en":"Learn implementation 
reviews"},"it_companies":{"_type":"localeString","en":"Find vendor and company-supplier"},"it_categories":{"_type":"localeString","en":"Explore IT products by category"},"it_our_products":{"_type":"localeString","en":"Our Products"},"it_it_catalogs":{"en":"IT catalogs","_type":"localeString"}},"footer":{"copyright":{"en":"All rights reserved","de":"Alle rechte vorbehalten","ru":"Все права защищены","_type":"localeString"},"company":{"de":"Über die Firma","ru":"О компании","_type":"localeString","en":"My Company"},"about":{"ru":"О нас","_type":"localeString","en":"About us","de":"Über uns"},"infocenter":{"ru":"Инфоцентр","_type":"localeString","en":"Infocenter","de":"Infocenter"},"tariffs":{"de":"Tarife","ru":"Тарифы","_type":"localeString","en":"Subscriptions"},"contact":{"ru":"Связаться с нами","_type":"localeString","en":"Contact us","de":"Kontaktiere uns"},"marketplace":{"de":"Marketplace","ru":"Marketplace","_type":"localeString","en":"Marketplace"},"products":{"en":"Products","de":"Produkte","ru":"Продукты","_type":"localeString"},"compare":{"de":"Wähle und vergleiche","ru":"Подобрать и сравнить","_type":"localeString","en":"Pick and compare"},"calculate":{"en":"Calculate the cost","de":"Kosten berechnen","ru":"Расчитать стоимость","_type":"localeString"},"get_bonus":{"_type":"localeString","en":"Bonus for reference","de":"Holen Sie sich einen Rabatt","ru":"Бонус за референс"},"salestools":{"en":"Salestools","de":"Salestools","ru":"Salestools","_type":"localeString"},"automatization":{"ru":"Автоматизация расчетов","_type":"localeString","en":"Settlement Automation","de":"Abwicklungsautomatisierung"},"roi_calcs":{"_type":"localeString","en":"ROI calculators","de":"ROI-Rechner","ru":"ROI калькуляторы"},"matrix":{"en":"Comparison matrix","de":"Vergleichsmatrix","ru":"Матрица сравнения","_type":"localeString"},"b4r":{"_type":"localeString","en":"Rebate 4 Reference","de":"Rebate 4 Reference","ru":"Rebate 4 Reference"},"our_social":{"_type":"localeString","en":"Our 
social networks","de":"Unsere sozialen Netzwerke","ru":"Наши социальные сети"},"subscribe":{"_type":"localeString","en":"Subscribe to newsletter","de":"Melden Sie sich für den Newsletter an","ru":"Подпишитесь на рассылку"},"subscribe_info":{"_type":"localeString","en":"and be the first to know about promotions, new features and recent software reviews","ru":"и узнавайте первыми об акциях, новых возможностях и свежих обзорах софта"},"policy":{"ru":"Политика конфиденциальности","_type":"localeString","en":"Privacy Policy"},"user_agreement":{"_type":"localeString","en":"Agreement","ru":"Пользовательское соглашение "},"solutions":{"ru":"Возможности","_type":"localeString","en":"Solutions"},"find":{"_type":"localeString","en":"Selection and comparison of IT product","ru":"Подбор и сравнение ИТ продукта"},"quote":{"ru":"Калькулятор цены","_type":"localeString","en":"Price calculator"},"boosting":{"ru":"Развитие бизнеса","_type":"localeString","en":"Business boosting"},"4vendors":{"ru":"поставщикам","_type":"localeString","en":"4 vendors"},"blog":{"ru":"блог","_type":"localeString","en":"blog"},"pay4content":{"ru":"платим за контент","_type":"localeString","en":"we pay for content"},"categories":{"ru":"категории","_type":"localeString","en":"categories"},"showForm":{"_type":"localeString","en":"Show form","ru":"Показать форму"},"subscribe__title":{"en":"Once a month we send a digest of current news from the IT world!","ru":"Раз в месяц мы отправляем дайджест актуальных новостей ИТ мира!","_type":"localeString"},"subscribe__email-label":{"ru":"Email","_type":"localeString","en":"Email"},"subscribe__name-label":{"ru":"Имя","_type":"localeString","en":"Name"},"subscribe__required-message":{"_type":"localeString","en":"This field is required","ru":"Это поле обязательное"},"subscribe__notify-label":{"_type":"localeString","en":"Yes, please notify me about news, events and offers","ru":"Да, пожалуйста уведомляйте меня о новостях, событиях и 
предложениях"},"subscribe__agree-label":{"en":"By subscribing to the newsletter, you agree to the %TERMS% and %POLICY% and agree to the use of cookies and the transfer of your personal data","ru":"Подписываясь на рассылку, вы соглашаетесь с %TERMS% и %POLICY% и даете согласие на использование файлов cookie и передачу своих персональных данных*","_type":"localeString"},"subscribe__submit-label":{"ru":"Подписаться","_type":"localeString","en":"Subscribe"},"subscribe__email-message":{"ru":"Пожалуйста, введите корректный адрес электронной почты","_type":"localeString","en":"Please enter a valid email"},"subscribe__email-placeholder":{"_type":"localeString","en":"username@gmail.com","ru":"username@gmail.com"},"subscribe__name-placeholder":{"en":"Last, first name","ru":"Имя Фамилия","_type":"localeString"},"subscribe__success":{"ru":"Вы успешно подписаны на рассылку. Проверьте свой почтовый ящик.","_type":"localeString","en":"You are successfully subscribed! Check your mailbox."},"subscribe__error":{"ru":"Не удалось оформить подписку. Пожалуйста, попробуйте позднее.","_type":"localeString","en":"Subscription is unsuccessful. 
Please, try again later."},"roi4presenter":{"de":"roi4presenter","ru":"roi4presenter","_type":"localeString","en":"Roi4Presenter"},"it_catalogs":{"_type":"localeString","en":"IT catalogs"},"roi4webinar":{"en":"Pitch Avatar","_type":"localeString"},"b4reference":{"_type":"localeString","en":"Bonus4Reference"}},"breadcrumbs":{"home":{"_type":"localeString","en":"Home","ru":"Главная"},"companies":{"en":"Companies","ru":"Компании","_type":"localeString"},"products":{"ru":"Продукты","_type":"localeString","en":"Products"},"implementations":{"ru":"Внедрения","_type":"localeString","en":"Deployments"},"login":{"en":"Login","ru":"Вход","_type":"localeString"},"registration":{"_type":"localeString","en":"Registration","ru":"Регистрация"},"b2b-platform":{"ru":"Портал для покупателей, поставщиков и производителей ИТ","_type":"localeString","en":"B2B platform for IT buyers, vendors and suppliers"}},"comment-form":{"title":{"en":"Leave comment","ru":"Оставить комментарий","_type":"localeString"},"firstname":{"en":"First name","ru":"Имя","_type":"localeString"},"lastname":{"_type":"localeString","en":"Last name","ru":"Фамилия"},"company":{"ru":"Компания","_type":"localeString","en":"Company name"},"position":{"ru":"Должность","_type":"localeString","en":"Position"},"actual-cost":{"_type":"localeString","en":"Actual cost","ru":"Фактическая стоимость"},"received-roi":{"ru":"Полученный ROI","_type":"localeString","en":"Received ROI"},"saving-type":{"en":"Saving type","ru":"Тип экономии","_type":"localeString"},"comment":{"ru":"Комментарий","_type":"localeString","en":"Comment"},"your-rate":{"ru":"Ваша оценка","_type":"localeString","en":"Your rate"},"i-agree":{"_type":"localeString","en":"I agree","ru":"Я согласен"},"terms-of-use":{"_type":"localeString","en":"With user agreement and privacy policy","ru":"С пользовательским соглашением и политикой 
конфиденциальности"},"send":{"en":"Send","ru":"Отправить","_type":"localeString"},"required-message":{"_type":"localeString","en":"{NAME} is a required field","ru":"{NAME} - это обязательное поле"}},"maintenance":{"title":{"_type":"localeString","en":"Site under maintenance","ru":"На сайте проводятся технические работы"},"message":{"en":"Thank you for your understanding","ru":"Спасибо за ваше понимание","_type":"localeString"}}},"translationsStatus":{"company":"success"},"sections":{},"sectionsStatus":{},"pageMetaData":{"company":{"title":{"_type":"localeString","en":"ROI4CIO: Company","ru":"ROI4CIO: Компания"},"meta":[{"name":"og:image","content":"https://roi4cio.com/fileadmin/templates/roi4cio/image/roi4cio-logobig.jpg"},{"name":"og:type","content":"website"}],"translatable_meta":[{"translations":{"ru":"Компания","_type":"localeString","en":"Company"},"name":"title"},{"translations":{"en":"Company description","ru":"Описание компании","_type":"localeString"},"name":"description"},{"name":"keywords","translations":{"_type":"localeString","en":"Company keywords","ru":"Ключевые слова для 
компании"}}]}},"pageMetaDataStatus":{"company":"success"},"subscribeInProgress":false,"subscribeError":false},"auth":{"inProgress":false,"error":false,"checked":true,"initialized":false,"user":{},"role":null,"expires":null},"products":{"productsByAlias":{},"aliases":{},"links":{},"meta":{},"loading":false,"error":null,"useProductLoading":false,"sellProductLoading":false,"templatesById":{},"comparisonByTemplateId":{}},"filters":{"filterCriterias":{"loading":false,"error":null,"data":{"price":{"min":0,"max":6000},"users":{"loading":false,"error":null,"ids":[],"values":{}},"suppliers":{"loading":false,"error":null,"ids":[],"values":{}},"vendors":{"loading":false,"error":null,"ids":[],"values":{}},"roles":{"id":200,"title":"Roles","values":{"1":{"id":1,"title":"User","translationKey":"user"},"2":{"id":2,"title":"Supplier","translationKey":"supplier"},"3":{"id":3,"title":"Vendor","translationKey":"vendor"}}},"categories":{"flat":[],"tree":[]},"countries":{"loading":false,"error":null,"ids":[],"values":{}}}},"showAIFilter":false},"companies":{"companiesByAlias":{"elkor-ua":{"id":3956,"title":"ELCORE Group","logoURL":"https://old.roi4cio.com/uploads/roi/company/Elcore_simple_logo_1.png","alias":"elkor-ua","address":"","roles":[{"id":2,"type":"supplier"}],"description":"<p>The international holding ELCORE GROUP was established in 2006. The holding company openly and successfully operates offices in Moldova, Georgia, Uzbekistan, Tajikistan, Armenia, Ukraine, Kazakhstan, Azerbaijan, Türkiye, Mongolia. The distributor offers customers a unique combination of a quality product, effective warranty and post-warranty service and qualified technical support through the creation of strong local teams. 
One of ELCORE's goals is to transfer this successful experience in developing a sales channel.</p>","companyTypes":["supplier"],"products":{},"vendoredProductsCount":0,"suppliedProductsCount":166,"supplierImplementations":[],"vendorImplementations":[],"userImplementations":[],"userImplementationsCount":0,"supplierImplementationsCount":0,"vendorImplementationsCount":0,"vendorPartnersCount":11,"supplierPartnersCount":0,"b4r":0,"categories":{"4":{"id":4,"title":"Data center","description":" A data center (or datacenter) is a facility composed of networked computers and storage that businesses or other organizations use to organize, process, store and disseminate large amounts of data. A business typically relies heavily upon the applications, services and data contained within a data center, making it a focal point and critical asset for everyday operations.\r\nData centers are not a single thing, but rather, a conglomeration of elements. At a minimum, data centers serve as the principal repositories for all manner of IT equipment, including servers, storage subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks used to organize and interconnect the IT equipment. A data center must also contain an adequate infrastructure, such as power distribution and supplemental power subsystems, including electrical switching; uninterruptable power supplies; backup generators and so on; ventilation and data center cooling systems, such as computer room air conditioners; and adequate provisioning for network carrier (telco) connectivity. 
All of this demands a physical facility with physical security and sufficient physical space to house the entire collection of infrastructure and equipment.","materialsDescription":" <span style=\"font-weight: bold;\">What are the requirements for modern data centers?</span>\r\nModernization and data center transformation enhance performance and energy efficiency.\r\nInformation security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment.\r\nIndustry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize.\r\nFocus on modernization is not new: Concern about obsolete equipment was decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: \"data center staff are aging faster than the equipment.\"\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Meeting standards for data centers</span></span>\r\nThe Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. 
The topology proposed in this document is intended to be applicable to any size data center.\r\nTelcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:\r\n<ul><li>Operate and manage a carrier's telecommunication network</li><li>Provide data center based applications directly to the carrier's customers</li><li>Provide hosted applications for a third party to provide services to their customers</li><li>Provide a combination of these and similar data center applications</li></ul>\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Data center transformation</span></span>\r\nData center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.\r\n<ul><li>Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization.</li><li>Virtualization: Lowers capital and operational expenses and reduces energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. 
Gartner views virtualization as a catalyst for modernization.</li><li>Automating: Automating tasks such as provisioning, configuration, patching, release management and compliance is needed, and not only when facing a shortage of skilled IT workers.</li><li>Securing: Protection of virtual systems is integrated with existing security of physical infrastructures.</li></ul>\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Machine room</span></span>\r\nThe term \"Machine Room\" is at times used to refer to the large room within a Data Center where the actual Central Processing Unit is located; this may be separate from where high-speed printers are located. Air conditioning is most important in the machine room.\r\nAside from air-conditioning, there must be monitoring equipment, one type of which detects water before flood-level situations develop. One company, for several decades, has had share-of-mind: Water Alert. The company, as of 2018, has 2 competing manufacturers (Invetex, Hydro-Temp) and 3 competing distributors (Longden, Northeast Flooring, Slayton). ","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Data_center.png","alias":"data-center"},"24":{"id":24,"title":"DLP - Data Leak Prevention","description":"Data leak prevention (DLP) is a suite of technologies aimed at stemming the loss of sensitive information that occurs in enterprises across the globe. By focusing on the location, classification and monitoring of information at rest, in use and in motion, this solution can go far in helping an enterprise get a handle on what information it has, and in stopping the numerous leaks of information that occur each day. DLP is not a plug-and-play solution. The successful implementation of this technology requires significant preparation and diligent ongoing maintenance. Enterprises seeking to integrate and implement DLP should be prepared for a significant effort that, if done correctly, can greatly reduce risk to the organization. 
Those implementing the solution must take a strategic approach that addresses risks, impacts and mitigation steps, along with appropriate governance and assurance measures.","materialsDescription":" <span style=\"font-weight: bold;\">How to protect the company from internal threats associated with leakage of confidential information?</span>\r\nIn order to protect against any threat, you must first recognize its presence. Unfortunately, company management is not always able to do this when it comes to information security threats. The key to successfully protecting against information leaks and other threats lies in the skillful use of both organizational and technical means of monitoring personnel actions.\r\n<span style=\"font-weight: bold;\">How should the personnel management system in the company be organized to minimize the risks of leakage of confidential information?</span>\r\nA company must have a dedicated employee responsible for information security, and a large company must have a dedicated department reporting directly to the head of the company.\r\n<span style=\"font-weight: bold;\">Which industry representatives are most likely to encounter confidential information leaks?</span>\r\nRepresentatives of sectors such as manufacturing, energy, and retail trade suffer from leaks more than others. 
Other industries traditionally exposed to leakage risks — banking, insurance, IT — are usually better at protecting themselves from information risks, and for this reason they are less likely to fall into similar situations.\r\n<span style=\"font-weight: bold;\">What should be adequate measures to protect against leakage of information for an average company?</span>\r\nFor each organization, the question of protection measures should be worked out depending on the specifics of its work, but developing information security policies, instructing employees, delineating access to confidential data and implementing a DLP system are necessary conditions for successful leak protection for any organization. Among all the technical means to prevent information leaks, the DLP system is the most effective today, although it must be chosen very carefully to get the desired result. So, it should control all possible channels of data leakage, support automatic detection of confidential information in outgoing traffic, maintain control of work laptops that temporarily find themselves outside the corporate network...\r\n<span style=\"font-weight: bold;\">Is it possible to outsource protection against information leaks?</span>\r\nFor a small company, this may make sense because it reduces costs. However, it is necessary to select the service provider carefully, ideally after obtaining recommendations from its current customers.\r\n<span style=\"font-weight: bold;\">What data channels need to be monitored to prevent leakage of confidential information?</span>\r\nAll channels used by employees of the organization - e-mail, Skype, the HTTP web protocol ... 
It is also necessary to monitor the information recorded on external storage media and sent to print, plus periodically check users' workstations and laptops for files that should not be there.\r\n<span style=\"font-weight: bold;\">What to do when the leak has already happened?</span>\r\nFirst of all, you need to notify those who might suffer - silence will cost your reputation much more. Secondly, you need to find the source and prevent further leakage. Next, you need to assess where the information could go, and try to negotiate so that it does not spread further. In general, of course, it is easier to prevent the leakage of confidential information than to disentangle its consequences.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Data_Leak_Prevention.png","alias":"dlp-data-leak-prevention"},"27":{"id":27,"title":"ODMS - Operational Database Management System","description":"Operational database management systems (also referred to as OLTP On-Line Transaction Processing databases) are used to update data in real-time. These types of databases allow users to do more than simply view archived data. Operational databases allow you to modify that data (add, change or delete data), doing it in real-time. OLTP databases provide transactions as the main abstraction to guarantee data consistency, the so-called ACID properties. Basically, the consistency of the data is guaranteed in the case of failures and/or concurrent access to the data.\r\nSince the early 90s, the operational database software market has been largely taken over by SQL engines. Today, the operational DBMS market (formerly OLTP) is evolving dramatically, with new, innovative entrants and incumbents supporting the growing use of unstructured data and NoSQL DBMS engines, as well as XML databases and NewSQL databases. NoSQL databases typically have focused on scalability and have renounced data consistency by not providing transactions as OLTP systems do. 
Operational databases are increasingly supporting distributed database architecture that can leverage distribution to provide high availability and fault tolerance through replication and scale-out ability.\r\nThe IT industry is rapidly moving from legacy databases to real-time operational databases capable of handling distributed web and mobile demand and addressing Big Data challenges. Recognizing this, Gartner started to publish the Magic Quadrant for Operational Database Management Systems in October 2013.\r\nOperational databases are used to store, manage and track real-time business information. For example, a company might have an operational database used to track warehouse/stock quantities. As customers order products from an online web store, an operational database can be used to keep track of how many items have been sold and when the company will need to reorder stock. An operational database stores information about the activities of an organization, for example, customer relationship management transactions or financial operations, in a computer database.\r\nOperational databases allow a business to enter, gather, and retrieve large quantities of specific information, such as company legal data, financial data, call data records, personal employee information, sales data, customer data, data on assets and much more. An important feature of storing information in an operational database is the ability to share information across the company and over the Internet. Operational databases can be used to manage mission-critical business data, to monitor activities, to audit suspicious transactions, or to review the history of dealings with a particular customer. 
They can also be part of the actual process of making and fulfilling a purchase, for example in e-commerce.","materialsDescription":" <span style=\"font-weight: bold;\">What is DBMS used for?</span>\r\nDBMS, commonly known as Database Management System, is an application system whose main purpose revolves around the data. This is a system that allows its users to store the data, define it, retrieve it and update the information about the data inside the database.\r\n<span style=\"font-weight: bold;\">What is meant by a Database?</span>\r\nIn simple terms, Database is a collection of data in some organized way to facilitate its user’s to easily access, manage and upload the data.\r\n<span style=\"font-weight: bold;\">Why is the use of DBMS recommended? Explain by listing some of its major advantages.</span>\r\nSome of the major advantages of DBMS are as follows:\r\n<ul><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Controlled Redundancy:</span></span> DBMS supports a mechanism to control the redundancy of data inside the database by integrating all the data into a single database and as data is stored at only one place, the duplicity of data does not happen.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Data Sharing:</span></span> Sharing of data among multiple users simultaneously can also be done in DBMS as the same database will be shared among all the users and by different application programs.</li><li> Backup and Recovery Facility: DBMS minimizes the pain of creating the backup of data again and again by providing a feature of ‘backup and recovery’ which automatically creates the data backup and restores the data whenever required.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Enforcement of Integrity Constraints:</span></span> Integrity Constraints are very important to be enforced on the data so that the refined data after putting some constraints are stored in the 
database, and DBMS enforces this.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Independence of Data:</span></span> It simply means that you can change the structure of the data without affecting the structure of any of the application programs.</li></ul>\r\n<span style=\"font-weight: bold;\">What is the purpose of normalization in DBMS?</span>\r\nNormalization is the process of analyzing relational schemas, based on their functional dependencies and primary keys, in order to achieve certain properties.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">The properties include:</span></span>\r\n<ul><li>To minimize the redundancy of the data.</li><li>To minimize insert, delete and update anomalies.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_ODMS.png","alias":"odms-operational-database-management-system"},"32":{"id":32,"title":"IT outsourcing","description":"<span style=\"font-weight: bold; \">IT outsourcing</span> is the use of external service providers to effectively deliver IT-enabled business process, application service and infrastructure solutions for business outcomes.\r\nOutsourcing, which also includes utility services, software as a service and cloud-enabled outsourcing, helps clients to develop the right sourcing strategies and vision, select the right IT service providers, structure the best possible contracts, and govern deals for sustainable win-win relationships with external providers.\r\nOutsourcing can enable enterprises to reduce costs, accelerate time to market, and take advantage of external expertise, assets and/or intellectual property. IT outsourcing can be implemented both ways: offshore or within the country. 
 \r\nIT outsourcing vendors can provide either a fully managed service, meaning they take full responsibility for all IT maintenance and support, or they can provide additional support for an internal IT team when needed, which is known as co-sourced IT support. A company using IT outsourcing can choose to use one provider for all their IT functions or split the work among multiple providers. \r\n<span style=\"font-weight: bold;\">Specific IT services typically outsourced include:</span>\r\n<ul><li>Application development</li><li>Web hosting</li><li>Application support</li><li>Database development</li><li>Telecommunications</li><li>Networking</li><li>Disaster recovery</li><li>Security</li></ul>\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">Reasons for Outsourcing</span></p>\r\n<span style=\"font-weight: bold; \">To Reduce Cost.</span> More often than not, outsourcing means saving money. This is often due to lower labor costs, cheaper infrastructure, or an advantageous tax system in the outsourcing location.<br /><span style=\"font-weight: bold; \">To Access Skills That Are Unavailable Locally.</span> Resources that are scarce at home can sometimes be found in abundance elsewhere, meaning you can easily reach them through outsourcing.<br /><span style=\"font-weight: bold; \">To Better Use Internal Resources.</span> 
By delegating some of your business processes to a third party, you’ll give your in-house employees the opportunity to focus on more meaningful tasks.<br /><span style=\"font-weight: bold; \">To Accelerate Business Processes.</span> When you stop wasting time on mundane, time-consuming processes, you’ll be able to move forward with your core offering a lot faster.<br /><span style=\"font-weight: bold; \">To Share Risks.</span> When you delegate a part of non-focus functionality by outsourcing it to a third-party vendor, you give away the responsibility and related risks.","materialsDescription":"<h3 class=\"align-center\">What are the Types of IT Outsourcing?</h3>\r\n<p class=\"align-left\"><span style=\"font-weight: bold; \">Project-Based Model.</span> The client hires a team to implement the part of work that is already planned and defined. The project manager from the outsourced team carries full responsibility for the quality and performance of the project.</p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold; \">Dedicated Team Model.</span> The client hires a team that will create a project for them, and they will work only on that project. Unlike the project-based model, a dedicated team is more engaged in your project. In this model, an outsourced team becomes your technical and product advisor. So it can offer ideas and suggest alternative solutions.</p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold; \">Outstaff Model.</span> It's a type of outsourcing in IT when you don't need a full-fledged development team and hire separate specialists. 
Sometimes the project requires finding a couple of additional professionals, and you're free to hire outstaff workers to cover that scope of work.</p>\r\n<h3 class=\"align-center\"><span style=\"font-weight: bold; \">What are IT Outsourcing examples?</span></h3>\r\nThe individual or company that becomes your outsourcing partner can be located anywhere in the world — one block away from your office or on another continent.\r\nA Bay Area-based startup partnering with an app development team in Utah and a call center in the Philippines, or a UK-based digital marketing agency hiring a Magento developer from Ukraine are both examples of outsourcing.\r\n<h3 class=\"align-center\">Why You Should Use IT Outsourcing</h3>\r\nNow that you know what IT outsourcing is, its models, and types, it's time to clarify why you need to outsource and whether you really need it. Let's go over a few situations that suggest when to opt for IT outsourcing.\r\n<ul><li><span style=\"font-weight: bold;\">You are a domain expert with an idea</span></li></ul>\r\nIf you're an industry expert with an idea that solves a real problem, IT outsourcing is your choice. In this case, your main goal is to enter the market and test the solution fast. An outsourced team will help you validate the idea, build an MVP to check the hypothesis, and implement changes in your product according to market needs. It saves you money and time and lets you reach your goal.\r\n<ul><li><span style=\"font-weight: bold;\">You have an early-stage startup</span></li></ul>\r\nIt's a common case that young startups spend money faster than they get a solid team and a ready-to-market product. The Failory found that financial problems are the third most common reason why startups fail. So it makes more sense to reduce costs by hiring an outsourced team of professionals while your business lives on investors' money. 
You may employ a full-cycle product development studio covering all the blind spots and bringing your product to life.\r\n<ul><li><span style=\"font-weight: bold;\">You need technical support</span></li></ul>\r\nIf you already have a ready solution but it demands some technical improvements – frameworks for backend components, a new language, integrations with enterprise software, UX&UI design – it makes more sense to find an experienced partner. There are many functions that IT outsourcing can cover, and again it saves you the time you'd otherwise spend on looking for qualified staff.<br /><br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_IT_outsourcing.png","alias":"it-outsourcing"},"33":{"id":33,"title":"UPS - Uninterruptible Power Supply","description":"An uninterruptible power supply (UPS), also known as a battery backup, provides backup power when your regular power source fails or voltage drops to an unacceptable level. A UPS allows for the safe, orderly shutdown of a computer and connected equipment. The size and design of a UPS determine how long it will supply power.\r\nDifferent UPS topologies provide specific levels of power protection.\r\nStandby is the most basic UPS topology. A standby UPS resorts to battery backup power in the event of common power problems such as a blackout, voltage sag, or voltage surge. When incoming utility power drops below or surges above safe voltage levels, the UPS switches to DC battery power and then inverts it to AC power to run connected equipment. These models are designed for consumer electronics, entry-level computers, POS systems, security systems, and other basic electronic equipment.\r\nA line-interactive UPS incorporates technology which allows it to correct minor power fluctuations (under-voltages and over-voltages) without switching to battery. 
This type of UPS has an autotransformer that regulates low voltages (e.g., brownouts) and over-voltages (e.g., swells) without having to switch to battery. Line-interactive UPS models are typically used for consumer electronics, PCs, gaming systems, home theater electronics, network equipment, and entry-to-mid-range servers. They provide power during such events as a blackout, voltage sag, voltage surge, or over-voltage.\r\nA double-conversion (online) UPS provides consistent, clean, and near-perfect power regardless of the condition of incoming power. This UPS converts incoming AC power to DC, and then back to AC. UPS systems with this technology operate on isolated DC power 100 percent of the time and have a zero transfer time because they never need to switch to battery power. Double-conversion UPS systems are designed to protect mission-critical IT equipment, data center installations, high-end servers, large telecom installations and storage applications, and advanced network equipment from damage caused by a power blackout, voltage sag, voltage surge, over-voltage, voltage spike, frequency noise, frequency variation, or harmonic distortion.","materialsDescription":" <span style=\"font-weight: bold;\">What is a UPS system?</span>\r\nUPS stands for uninterruptible power supply. This means that a UPS system is designed to keep the power running at all times. For instance, load shedding will be a problem of the past with our wide variety of products and solutions keeping your business moving.\r\n<span style=\"font-weight: bold;\">Where is a UPS used?</span>\r\nUPS systems can be used anywhere that needs to ensure that the power stays on. The most common applications are where power is critical to avoid infrastructure damage e.g. 
data centers and manufacturing facilities.\r\n<span style=\"font-weight: bold;\">What is the difference between a battery and a UPS?</span>\r\nA battery is a device that stores energy; a UPS detects when there is no longer any power coming from the mains and switches over to the UPS batteries.\r\n<span style=\"font-weight: bold;\">Can I use a UPS for 6-7 hours?</span>\r\nIf the power requirement is low and the UPS is overrated, possibly, but normally running a UPS for this long requires so many batteries that it becomes unfeasible both financially and physically. It would be best to run a standby generator alongside your UPS to achieve this.\r\n<span style=\"font-weight: bold;\">What is the difference between a UPS and an Inverter?</span>\r\nThe UPS and inverter both provide the backup supply to the electrical system. The major difference between the UPS and inverter is that the UPS switches from the main supply to the battery immediately, but the inverter takes much longer.\r\n<span style=\"font-weight: bold;\">What is a non-critical load in a power system?</span>\r\nA non-critical load is an electrical device or devices that aren’t key to keeping a business running or won’t be damaged by a power cut. In short, it doesn’t matter if these devices lose power in an outage.\r\n<span style=\"font-weight: bold;\">What is backup power?</span>\r\nBackup power is a term that simply means a source of power if the main power source fails. This can be anything from some AA batteries in your mains-powered alarm clock to a UPS system and standby generator connected to your data center.\r\n<span style=\"font-weight: bold;\">What is the difference between a standby generator and a UPS system?</span>\r\nWhile both protect against a power cut, a UPS is an immediate, short-term solution, providing power straight away for as long as its batteries have a charge. 
A standby generator is a longer-term solution that is slower to start up but will provide power for as long as it has fuel.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_UPS.png","alias":"ups-uninterruptible-power-supply"},"40":{"id":40,"title":"Endpoint security","description":"In network security, endpoint security refers to a methodology of protecting the corporate network when accessed via remote devices such as laptops or other wireless and mobile devices. Each device with a remote connection to the network creates a potential entry point for security threats. Endpoint security is designed to secure each endpoint on the network created by these devices.\r\nUsually, endpoint security is a security system that consists of security software, located on a centrally managed and accessible server or gateway within the network, in addition to client software being installed on each of the endpoints (or devices). The server authenticates logins from the endpoints and also updates the device software when needed. While endpoint security software differs by vendor, you can expect most software offerings to provide antivirus, antispyware, firewall and also a host intrusion prevention system (HIPS).\r\nEndpoint security is becoming a more common IT security function and concern as more employees bring consumer mobile devices to work and companies allow their mobile workforce to use these devices on the corporate network.<br /><br />","materialsDescription":"<span style=\"font-weight: bold;\">What are endpoint devices?</span>\r\nAny device that can connect to the central business network is considered an endpoint. 
Endpoint devices are potential entry points for cybersecurity threats and need strong protection because they are often the weakest link in network security.\r\n<span style=\"font-weight: bold;\">What is endpoint security management?</span>\r\nA set of rules defining the level of security that each device connected to the business network must comply with. These rules may include using an approved operating system (OS), installing a virtual private network (VPN), or running up-to-date antivirus software. If the device connecting to the network does not have the desired level of protection, it may have to connect via a guest network and have limited network access.\r\n<span style=\"font-weight: bold;\">What is endpoint security software?</span>\r\nPrograms that make sure your devices are protected. Endpoint protection software may be cloud-based and work as SaaS (Software as a Service). Endpoint security software can also be installed on each device separately as a standalone application.\r\n<span style=\"font-weight: bold;\">What is endpoint detection and response (EDR)?</span>\r\nEndpoint detection and response (EDR) solutions analyze files and programs, and report on any threats found. 
EDR solutions monitor continuously for advanced threats, helping to identify attacks at an early stage and respond rapidly to a range of threats.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Endpoint_security.png","alias":"endpoint-security"},"42":{"id":42,"title":"UTM - Unified threat management","description":"A <span style=\"font-weight: bold; \">UTM (Unified Threat Management)</span> system is a type of network hardware appliance, virtual appliance or cloud service that protects businesses from security threats in a simplified way by combining and integrating multiple security services and features.\r\nUnified threat management <span style=\"font-weight: bold; \">devices </span>are often packaged as network security appliances that can help protect networks against combined security threats, including malware and attacks that simultaneously target separate parts of the network.\r\nUTM <span style=\"font-weight: bold; \">cloud services</span> and virtual network appliances are becoming increasingly popular for network security, especially for smaller and medium-sized businesses. They both do away with the need for on-premises network security appliances, yet still provide centralized control and ease of use for building network security defense in depth. While UTM systems and <span style=\"font-weight: bold; \">next-generation firewalls (NGFWs)</span> are sometimes comparable, a unified threat management device includes added security features that NGFWs don't offer.\r\nOriginally developed to fill the network security gaps left by traditional firewalls, NGFWs usually include application intelligence and intrusion prevention systems, as well as denial-of-service protection. 
Unified threat management devices offer multiple layers of network security, including next-generation firewalls, intrusion detection/prevention systems, antivirus, virtual private networks (VPN), spam filtering and URL filtering for web content.\r\nUnified threat management appliances have gained traction in the industry due to the emergence of blended threats, which are combinations of different types of malware and attacks that target separate parts of the network simultaneously. By creating a single point of defense and providing a single console, unified security management makes dealing with varied threats much easier.\r\nUnified threat management products provide increased protection and visibility, as well as control over network security, reducing complexity. A unified threat management system typically does this via inspection methods that address different types of threats. These methods include:\r\n<ul><li><span style=\"font-weight: bold; \">Flow-based inspection,</span> also known as stream-based inspection, samples data that enters a UTM device, and then uses pattern matching to determine whether there is malicious content in the data flow.</li><li> <span style=\"font-weight: bold; \">Proxy-based inspection</span> acts as a proxy to reconstruct the content entering a UTM device, and then executes a full inspection of the content to search for potential security threats. If the content is clean, the device sends the content to the user. 
However, if a virus or other security threat is detected, the device removes the questionable content, and then sends the file or webpage to the user.</li></ul>\r\n\r\n","materialsDescription":"<h1 class=\"align-center\"> How UTM is deployed?</h1>\r\nBusinesses can implement UTM as a UTM appliance that connects to a company's network, as a software program running on an existing network server, or as a service that works in a cloud environment.\r\nUTMs are particularly useful in organizations that have many branches or retail outlets that have traditionally used dedicated WAN, but are increasingly using public internet connections to the headquarters/data center. Using a UTM in these cases gives the business more insight and better control over the security of those branch or retail outlets.\r\nBusinesses can choose from one or more methods to deploy UTM to the appropriate platforms, but they may also find it most suitable to select a combination of platforms. Some of the options include installing unified threat management software on the company's servers in a data center; using software-based UTM products on cloud-based servers; using traditional UTM hardware appliances that come with preintegrated hardware and software; or using virtual appliances, which are integrated software suites that can be deployed in virtual environments.\r\n<h1 class=\"align-center\">Benefits of Using a Unified Threat Management Solution</h1>\r\nUTM solutions offer unique benefits to small and medium businesses that are looking to enhance their security programs. Because the capabilities of multiple specialized programs are contained in a single appliance, UTM threat management reduces the complexity of a company’s security system. Similarly, having one program that controls security reduces the amount of training that employees receive when being hired or migrating to a new system and allows for easy management in the future. 
This can also save money in the long run as opposed to having to buy multiple devices.\r\nSome UTM solutions provide additional benefits for companies in strictly regulated industries. Appliances that use identity-based security to report on user activity while enabling policy creation based on user identity meet the requirements of regulatory compliance such as HIPAA, CIPA, and GLBA, which require access controls and auditing to control data leakage.\r\nUTM solutions also help to protect networks against combined threats. These threats consist of different types of malware and attacks that target separate parts of the network simultaneously. When using separate appliances for each security wall, preventing these combined attacks can be difficult. This is because each security wall has to be managed individually in order to remain up-to-date with the changing security threats. Because they provide a single point of defense, UTMs make dealing with combined threats easier.\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_UTM.jpg","alias":"utm-unified-threat-management"},"43":{"id":43,"title":"Data Encryption","description":"<span style=\"font-weight: bold;\">Data encryption</span> translates data into another form, or code, so that only people with access to a secret key (formally called a decryption key) or password can read it. Encrypted data is commonly referred to as ciphertext, while unencrypted data is called plaintext. Currently, encryption is one of the most popular and effective data security methods used by organizations. \r\nTwo main types of data encryption exist - <span style=\"font-weight: bold;\">asymmetric encryption</span>, also known as public-key encryption, and <span style=\"font-weight: bold;\">symmetric encryption</span>.<br />The purpose of data encryption is to protect digital data confidentiality as it is stored on computer systems and transmitted using the internet or other computer networks. 
The outdated data encryption standard (DES) has been replaced by modern encryption algorithms that play a critical role in the security of IT systems and communications.\r\nThese algorithms provide confidentiality and drive key security initiatives including authentication, integrity, and non-repudiation. Authentication allows for the verification of a message’s origin, and integrity provides proof that a message’s contents have not changed since it was sent. Additionally, non-repudiation ensures that a message sender cannot deny sending the message.\r\nData protection software for data encryption can provide encryption of devices, email, and data itself. In many cases, these encryption functionalities are also met with control capabilities for devices, email, and data. \r\nCompanies and organizations face the challenge of protecting data and preventing data loss as employees use external devices, removable media, and web applications more often as a part of their daily business procedures. Sensitive data may no longer be under the company’s control and protection as employees copy data to removable devices or upload it to the cloud. As a result, the best data loss prevention solutions prevent data theft and the introduction of malware from removable and external devices as well as web and cloud applications. In order to do so, they must also ensure that devices and applications are used properly and that data is secured by auto-encryption even after it leaves the organization.\r\nEncryption software program encrypts data or files by working with one or more encryption algorithms. Security personnel use it to protect data from being viewed by unauthorized users.\r\nTypically, each data packet or file encrypted via data encryption programs requires a key to be decrypted to its original form. This key is generated by the software itself and shared between the data/file sender and receiver. 
Thus, even if the encrypted data is extracted or compromised, its original content cannot be retrieved without the encryption key. File encryption, email encryption, disk encryption and network encryption are widely used types of data encryption software.<br /><br />","materialsDescription":"<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What is Encryption software?</span></h1>\r\nEncryption software is software that uses cryptography to prevent unauthorized access to digital information. Cryptography is used to protect digital information on computers as well as the digital information that is sent to other computers over the Internet. There are many software products which provide encryption. Software encryption uses a cipher to obscure the content into ciphertext. One way to classify this type of software is by the type of cipher used. Ciphers can be divided into two categories: <span style=\"font-weight: bold;\">public key ciphers</span> (also known as asymmetric ciphers), and <span style=\"font-weight: bold;\">symmetric key ciphers</span>. Encryption software can be based on either public key or symmetric key encryption.\r\nAnother way to classify crypto software is by its purpose. Using this approach, software encryption may be classified into software which encrypts "<span style=\"font-weight: bold;\">data in transit</span>" and software which encrypts "<span style=\"font-weight: bold;\">data at rest</span>". Data in transit generally uses public key ciphers, and data at rest generally uses symmetric key ciphers.\r\nSymmetric key ciphers can be further divided into stream ciphers and block ciphers. Stream ciphers typically encrypt plaintext a bit or byte at a time, and are most commonly used to encrypt real-time communications, such as audio and video information. The key is used to establish the initial state of a keystream generator, and the output of that generator is used to encrypt the plaintext. 
Block cipher algorithms split the plaintext into fixed-size blocks and encrypt one block at a time. For example, AES processes 16-byte blocks, while its predecessor DES encrypted blocks of eight bytes.<br />There is also a well-known case where PKI is used for data in transit or data at rest.\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">How is Data Encryption used?</span></h1>\r\nThe purpose of data encryption is to deter malicious or negligent parties from accessing sensitive data. An important line of defense in a cybersecurity architecture, encryption makes using intercepted data as difficult as possible. It can be applied to all kinds of data protection needs ranging from classified government intel to personal credit card transactions. Data encryption software, also known as an encryption algorithm or cipher, is used to develop an encryption scheme which theoretically can only be broken with large amounts of computing power.\r\nEncryption is an incredibly important tool for keeping your data safe. When your files are encrypted, they are completely unreadable without the correct encryption key. If someone steals your encrypted files, they won’t be able to do anything with them.\r\nThere are two types of encryption: hardware and software. Both offer different advantages. So, what are these methods and why do they matter?\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">Software Encryption</span></h1>\r\n<p class=\"align-left\">As the name implies, software encryption uses features of encryption software to encrypt your data. Encryption software typically relies on a password: enter the right password and your files will be decrypted; otherwise they remain locked. With encryption enabled, your data is passed through a special algorithm that scrambles it as it is written to disk. 
The same software then unscrambles data as it is read from the disk for an authenticated user.</p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Pros.</span> Encryption software is typically quite cheap to implement, making it very popular with developers. In addition, software-based encryption routines do not require any additional hardware.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Cons.</span> Software encryption is only as secure as the rest of your computer or smartphone. If a hacker can crack your password, the encryption is immediately undone.<br />Software encryption tools also share the processing resources of your computer, which can cause the entire machine to slow down as data is encrypted/decrypted. You will also find that opening and closing encrypted files is much slower than normal because the process is relatively resource-intensive, particularly for higher levels of encryption.</p>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">Hardware encryption</span></h1>\r\n<p class=\"align-left\">At the heart of hardware encryption is a separate processor dedicated to the task of authentication and encryption. Hardware encryption is increasingly common on mobile devices. <br />The encryption protection technology still relies on a special key to encrypt and decrypt data, but this is randomly generated by the encryption processor. Oftentimes, hardware encryption devices replace traditional passwords with biometric logons (like fingerprints) or a PIN that is entered on an attached keypad.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Pros.</span> Hardware encryption is stronger and safer than software solutions because the encryption process is separate from the rest of the machine. This makes it much harder to intercept or break. 
</p>\r\n<p class=\"align-left\">The use of a dedicated processor also relieves the burden on the rest of your device, making the encryption and decryption process much faster.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Cons.</span> Typically, hardware-based encrypted storage is much more expensive than software encryption tools. <br />If the hardware decryption processor fails, it becomes extremely hard to access your information.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">The Data Recovery Challenge. </span>Encrypted data is a challenge to recover. Even if you recover the raw sectors from a failed drive, the data is still encrypted, and therefore still unreadable. </p>\r\n<p class=\"align-left\">Hardware-encrypted devices don’t typically have these additional recovery options. Many are designed to prevent decryption in the event of a component failure, which stops hackers from disassembling them. The fastest and most effective way to deal with data loss on an encrypted device is to ensure you have a complete backup stored somewhere safe. For your PC, this may mean copying data to another encrypted device. For other devices, like your smartphone, backing up to the Cloud provides a quick and simple economical copy that you can restore from. As an added bonus, most Cloud services now encrypt their users’ data too. <br /><br /><br /></p>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Data_Encryption.png","alias":"data-encryption"},"48":{"id":48,"title":"CCTV - Closed-circuit television","description":"CCTV (closed-circuit television) is a TV system in which signals are not publicly distributed but are monitored, primarily for surveillance and security purposes.\r\nCCTV relies on strategic placement of cameras, and observation of the camera's input on monitors. 
Because the cameras communicate with monitors and/or video recorders across private coaxial cable runs or wireless communication links, they gain the designation "closed-circuit" to indicate that access to their content is limited by design only to those able to see it.\r\nOlder CCTV systems used small, low-resolution black and white monitors with no interactive capabilities. Modern CCTV displays can be color, high-resolution screens and can include the ability to zoom in on an image or track something (or someone) among their features. Talk CCTV allows an overseer to speak to people within range of the camera's associated speakers.\r\nCCTV is commonly used for a variety of purposes, including:\r\n<ul><li>Maintaining perimeter security in medium- to high-secure areas and installations.</li><li>Observing the behavior of incarcerated inmates and potentially dangerous patients in medical facilities.</li><li>Traffic monitoring.</li><li>Overseeing locations that would be hazardous to a human, for example, highly radioactive or toxic industrial environments.</li><li>Building and grounds security.</li><li>Obtaining a visual record of activities in situations where it is necessary to maintain proper security or access controls (for example, in a diamond cutting or sorting operation; in banks, casinos, or airports).</li></ul>\r\nCCTV is finding increasing use in law enforcement, for everything from traffic observation (and automated ticketing) to observation of high-crime areas or neighborhoods. 
Such use of CCTV technology has fueled privacy concerns in many parts of the world, particularly in those areas of the UK and Europe where it has become a routine part of police procedure.","materialsDescription":" <span style=\"text-decoration: underline; \"><span style=\"font-weight: bold; \">Uses</span></span>\r\n<span style=\"font-weight: bold; \">Crime prevention</span>\r\nA 2009 systematic review by researchers from Northeastern University and the University of Cambridge used meta-analytic techniques to pool the average effect of CCTV on crime across 41 different studies. The results indicated that:\r\n<ul><li>CCTV caused a significant reduction in crime of, on average, 16%.</li><li>The largest effects of CCTV were found in car parks, where cameras reduced crime by, on average, 51%.</li><li>CCTV schemes in other public settings had small and non-statistically significant effects on crime: a 7% reduction in city and town centers and a 23% reduction in public transport settings.</li><li>When sorted by country, systems in the United Kingdom accounted for the majority of the decrease; the drop in other areas was insignificant.</li></ul>\r\nThe studies included in the meta-analysis used quasi-experimental evaluation designs that involve before-and-after measures of crime in experimental and control areas. However, several researchers have pointed to methodological problems associated with this research literature. First, researchers have argued that the British car park studies included in the meta-analysis cannot accurately control for the fact that CCTV was introduced simultaneously with a range of other security-related measures.
Second, some have noted that, in many of the studies, there may be issues with selection bias, since the introduction of CCTV was potentially endogenous to previous crime trends. In particular, the estimated effects may be biased if CCTV is introduced in response to crime trends.\r\nIt has been argued that problems of selection bias and endogeneity can be addressed by stronger research designs, such as randomized controlled trials and natural experiments. A 2017 review published in the Journal of Scandinavian Studies in Criminology and Crime Prevention compiles seven studies that use such research designs. The studies included in the review found that CCTV reduced crime by 24-28% in public streets and urban subway stations. It also found that CCTV could decrease unruly behaviour in football stadiums and theft in supermarkets/mass merchant stores. However, there was no evidence of CCTV having desirable effects in parking facilities or suburban subway stations. Furthermore, the review indicates that CCTV is more effective in preventing property crimes than violent crimes.\r\nAnother question about the effectiveness of CCTV for policing concerns the uptime of the system; in 2013 the City of Philadelphia Auditor found that the $15M system was operational only 32% of the time. There is still much research to be done to determine the effectiveness of CCTV cameras on crime prevention before any conclusions can be drawn.\r\nThere is strong anecdotal evidence that CCTV aids in the detection and conviction of offenders; indeed, UK police forces routinely seek CCTV recordings after crimes. Moreover, CCTV has played a crucial role in tracing the movements of suspects or victims and is widely regarded by antiterrorist officers as a fundamental tool in tracking terrorist suspects. Large-scale CCTV installations have played a key part in defences against terrorism since the 1970s.
Cameras have also been installed on public transport in the hope of deterring crime, and in mobile police surveillance vehicles, often with automatic number plate recognition, and a network of ANPR-linked cameras is used to manage London's congestion charging zone.\r\nA more open question is whether most CCTV is cost-effective. While low-quality domestic kits are cheap, the professional installation and maintenance of high-definition CCTV is expensive. Gill and Spriggs did a cost-effectiveness analysis (CEA) of CCTV in crime prevention that showed little monetary saving with the installation of CCTV, as most of the crimes prevented resulted in little monetary loss. Critics, however, noted that benefits of non-monetary value cannot be captured in a traditional cost-effectiveness analysis and were omitted from their study. A 2008 report by UK police chiefs concluded that only 3% of crimes were solved by CCTV. In London, a Metropolitan Police report showed that in 2008 only one crime was solved per 1000 cameras. In some cases, CCTV cameras have themselves become a target of attacks.\r\nCities such as Manchester in the UK are using DVR-based technology to improve accessibility for crime prevention.\r\nIn October 2009, an "Internet Eyes" website was announced which would pay members of the public to view CCTV camera images from their homes and report any crimes they witnessed. The site aimed to add "more eyes" to cameras which might be insufficiently monitored. Civil liberties campaigners criticized the idea as "a distasteful and a worrying development".\r\nIn 2013 Oaxaca hired deaf police officers to lip-read conversations to uncover criminal conspiracies.\r\nIn Singapore, since 2012, thousands of CCTV cameras have helped deter loan sharks, nab litterbugs and stop illegal parking, according to government figures.\r\n<span style=\"font-weight: bold; \">Body worn</span>\r\nIn recent years, the use of body-worn video cameras has been introduced for a number of uses.
For example, as a new form of surveillance in law enforcement, with cameras located on a police officer's chest or head.\r\n<span style=\"font-weight: bold; \">Industrial processes</span>\r\nIndustrial processes that take place under conditions dangerous for humans are today often supervised by CCTV. These are mainly processes in the chemical industry, the interior of reactors or facilities for the manufacture of nuclear fuel. Special cameras for some of these purposes include line-scan cameras and thermographic cameras, which allow operators to measure the temperature of the processes. The use of CCTV in such processes is sometimes required by law.\r\n<span style=\"font-weight: bold; \">Traffic monitoring</span>\r\nMany cities and motorway networks have extensive traffic-monitoring systems, using closed-circuit television to detect congestion and notice accidents. Many of these cameras, however, are owned by private companies and transmit data to drivers' GPS systems.\r\nThe UK Highways Agency has a publicly owned CCTV network of over 3000 pan-tilt-zoom cameras covering the British motorway and trunk road network. These cameras are primarily used to monitor traffic conditions and are not used as speed cameras. With the addition of fixed cameras for the active traffic management system, the number of cameras on the Highways Agency's CCTV network is likely to increase significantly over the next few years.\r\nThe London congestion charge is enforced by cameras positioned at the boundaries of and inside the congestion charge zone, which automatically read the licence plates of cars. If the driver does not pay the charge, a fine is imposed.
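The enforcement logic described above (a camera reads each licence plate, and a fine follows if no payment is recorded) can be sketched in a few lines. The plate numbers, the paid-plate set and the fine amount below are purely illustrative assumptions, not the actual London system:

```python
# Minimal sketch of congestion-charge enforcement: a camera read is checked
# against the set of plates with a recorded payment, and unpaid plates are
# fined. All values here are hypothetical, for illustration only.

PAID_TODAY = {"LD63 ABC", "KX19 ZYX"}  # hypothetical plates with a paid charge
FINE = 160                             # hypothetical fine amount

def process_plate_read(plate: str) -> int:
    """Return the fine owed for one camera read (0 if the charge was paid)."""
    normalized = plate.strip().upper()  # plate reads may vary in case/spacing
    return 0 if normalized in PAID_TODAY else FINE

fines = [process_plate_read(p) for p in ["ld63 abc", "BN07 QQQ"]]
print(fines)  # the second plate has no payment recorded, so it is fined
```

A real system would of course also handle exemptions, grace periods and plate-recognition errors; the point here is only the lookup-and-fine flow.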
Similar systems are being developed as a means of locating cars reported stolen.\r\nOther surveillance cameras serve as traffic enforcement cameras.\r\n<span style=\"font-weight: bold; \">Transport safety</span>\r\nA CCTV system may be installed where an operator cannot directly observe people. For example, on a driver-only operated train, CCTV cameras may allow the driver to confirm that people are clear of the doors before closing them and starting the train.\r\n<span style=\"font-weight: bold; \">Sporting events</span>\r\nMany sporting events in the United States use CCTV inside the venue for fans to see the action while they are away from their seats. The cameras send the feed to a central control center, where a producer selects feeds to send to the television monitors that fans can view. CCTV monitors for viewing the event by attendees are often placed in lounges, hallways, and restrooms. This use of CCTV is not for surveillance purposes.\r\n<span style=\"font-weight: bold; \">Monitor employees</span>\r\nOrganizations use CCTV to monitor the actions of workers. Every action is recorded as an information block with subtitles that explain the performed operation.
This helps to track the actions of workers, especially when they are making critical financial transactions, such as correcting or cancelling a sale, withdrawing money or altering personal information.\r\nActions which an employer may wish to monitor could include:\r\n<ul><li>Scanning of goods, selection of goods, entry of price and quantity;</li><li>Operators logging into and out of the system when entering passwords;</li><li>Deleting operations and modifying existing documents;</li><li>Performing certain operations, such as financial statements or operations with cash;</li><li>Moving goods, revaluation, scrapping and counting;</li><li>Control in the kitchen of fast food restaurants;</li><li>Changing settings, reports and other official functions.</li></ul>\r\nEach of these operations is transmitted with a description, allowing detailed monitoring of all actions of the operator. Some systems allow the user to search for a specific event by time of occurrence and text description, and perform statistical evaluation of operator behaviour. This allows the software to predict deviations from the standard workflow and record only anomalous behaviour.\r\n<span style=\"font-weight: bold; \">Use in schools</span>\r\nIn the United States, Britain, Australia and New Zealand, CCTV is widely used in schools due to its success in preventing bullying and vandalism, monitoring visitors and maintaining a record of evidence in the event of a crime. There are some restrictions on installation, with cameras not being installed in an area where there is a "reasonable expectation of privacy", such as bathrooms, gym locker areas and private offices (unless consent is given by the office occupant). Cameras are generally acceptable in hallways, parking lots, front offices where students, employees, and parents come and go, gymnasiums, cafeterias, supply rooms and classrooms.
Some teachers may object to the installation of cameras in classrooms.\r\n<span style=\"font-weight: bold; \">Criminal use</span>\r\nCriminals may use surveillance cameras to monitor the public. For example, a hidden camera at an ATM can capture people's PINs as they are entered, without their knowledge. The devices are small enough not to be noticed, and are placed where they can monitor the keypad of the machine as people enter their PINs. Images may be transmitted wirelessly to the criminal. Even data from lawful surveillance cameras sometimes ends up in the hands of people who have no legal right to receive it.\r\n\r\n<span style=\"text-decoration: underline; \"><span style=\"font-weight: bold; \">Technological developments</span></span>\r\n<span style=\"font-weight: bold; \">Computer-controlled analytics and identification</span>\r\nComputer-controlled cameras can identify, track, and categorize objects in their field of view.\r\n<span style=\"font-weight: bold; \">Video content analysis (VCA)</span> is the capability of automatically analyzing video to detect and determine temporal events based not on a single image but on object classification. As such, it can be seen as the automated equivalent of the biological visual cortex.\r\nA system using VCA can recognize changes in the environment and even identify and compare objects in its database using size, speed, and sometimes colour. The camera's actions can be programmed based on what it is "seeing". For example, an alarm can be issued if an object moves into a certain area, a painting goes missing from a wall, smoke or fire is detected, running or fallen people are detected, or someone has spray-painted the lens, as well as on video loss, lens covering, defocus and other so-called camera-tampering events.\r\nVCA analytics can also be used to detect unusual patterns in an environment.
The system can be set to detect anomalies in a crowd, for instance a person moving in the opposite direction in airports, where passengers are supposed to walk only in one direction out of a plane, or in a subway, where people are not supposed to exit through the entrances.\r\nVCA can track people on a map by calculating their position from the images. It is then possible to link many cameras and track a person through an entire building or area. This allows a person to be followed without having to analyze many hours of footage. Currently, cameras have difficulty identifying individuals from video alone, but if connected to a key-card system, identities can be established and displayed as a tag over their heads on the video.\r\nThere is also a significant difference in where the VCA technology is placed: the data may be processed within the cameras themselves (at the edge) or by a centralized server. Both approaches have their pros and cons.\r\nA <span style=\"font-weight: bold; \">facial recognition system</span> is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image against a facial database.\r\nThe combination of CCTV and facial recognition has been tried as a form of mass surveillance, but has been ineffective because of the low discriminating power of facial recognition technology and the very high number of false positives generated. This type of system has been proposed to compare faces at airports and seaports with those of suspected terrorists or other undesirable entrants. Computerized monitoring of CCTV images is under development, so that a human CCTV operator does not have to endlessly look at all the screens, allowing an operator to observe many more CCTV cameras. These systems do not observe people directly.
Instead, they track certain types of body-movement behavior, or particular types of clothing or baggage.\r\nTo many, the development of CCTV in public areas, linked to computer databases of people's pictures and identity, presents a serious breach of civil liberties. Conservative critics fear the possibility that one would no longer have anonymity in public places. Demonstrations or assemblies in public places could be affected as the state would be able to collate lists of those leading them, taking part, or even just talking with protesters in the street.\r\nComparatively harmless are people counter systems. They use CCTV equipment as the front-end eyes of devices that perform shape recognition in order to identify objects as human beings and count people passing through pre-defined areas.\r\n<span style=\"font-weight: bold; \">Retention, storage and preservation</span>\r\nMost CCTV systems can record and store digital video and images to a digital video recorder (DVR) or, in the case of IP cameras, directly to a server, either on-site or off-site.\r\nThere is a cost in the retention of the images produced by CCTV systems. The amount and quality of data stored on storage media depend on the compression ratio, the number of images stored per second and the image size, and are affected by the retention period of the videos or images. DVRs store images in a variety of proprietary file formats. Recordings may be retained for a preset amount of time and then automatically archived, overwritten or deleted, the period being determined by the organisation that generated them.\r\n<span style=\"font-weight: bold; \">Closed-circuit digital photography (CCDP)</span>\r\nClosed-circuit digital photography (CCDP) is more suited to capturing and saving recorded high-resolution photographs, whereas closed-circuit television (CCTV) is more suitable for live-monitoring purposes.\r\nHowever, an important feature of some CCTV systems is the ability to take high-resolution images of the camera scene, e.g. on a time-lapse or motion-detection basis.
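Motion-detection capture of the kind mentioned above usually rests on comparing successive frames. The following minimal sketch uses plain lists of greyscale pixel values and made-up thresholds; real systems use far more robust background-subtraction techniques:

```python
# Illustrative frame-differencing motion trigger: compare two successive
# greyscale frames and fire when enough pixels change. Frames are flat lists
# of 0-255 values; both thresholds are arbitrary, assumed values.

PIXEL_THRESHOLD = 25    # per-pixel brightness change that counts as "different"
TRIGGER_FRACTION = 0.1  # fraction of changed pixels that triggers a capture

def motion_detected(prev_frame, frame):
    """True if enough pixels differ between two equal-length frames."""
    changed = sum(
        1 for a, b in zip(prev_frame, frame) if abs(a - b) > PIXEL_THRESHOLD
    )
    return changed / len(frame) >= TRIGGER_FRACTION

static = [10] * 100                  # a frame where nothing happens
moved = [10] * 80 + [200] * 20       # 20% of pixels changed brightness sharply
print(motion_detected(static, static))  # False: no pixels changed
print(motion_detected(static, moved))   # True: 20% changed, above the 10% bar
```

In practice, cameras also smooth frames and model slow lighting changes before differencing, so this two-frame comparison is only the core idea.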
Images taken with a digital still camera often have higher resolution than those taken with some video cameras. Increasingly, low-cost high-resolution digital still cameras can also be used for CCTV purposes.\r\nImages may be monitored remotely when the computer is connected to a network.\r\n<span style=\"font-weight: bold; \">IP cameras</span>\r\nA growing branch of CCTV is internet protocol cameras (IP cameras). It is estimated that 2014 was the first year that IP cameras outsold analog cameras. IP cameras use the Internet Protocol (IP) used by most Local Area Networks (LANs) to transmit video across data networks in digital form. IP video can optionally be transmitted across the public internet, allowing users to view their cameras through any internet connection available on a computer or a phone; this is considered remote access. For professional or public infrastructure security applications, IP video is restricted to within a private network or VPN, or can be recorded onto a remote server.\r\n<span style=\"font-weight: bold; \">Networking CCTV cameras</span>\r\nThe city of Chicago operates a networked video surveillance system which combines CCTV video feeds of government agencies with those of the private sector, installed in city buses, businesses, public schools, subway stations, housing projects etc. Even homeowners are able to contribute footage. It is estimated to incorporate the video feeds of a total of 15,000 cameras.\r\nThe system is used by Chicago's Office of Emergency Management in case of an emergency call: it detects the caller's location and instantly displays the real-time video feed of the nearest security camera to the operator, requiring no user intervention.
While the system is far too vast to allow complete real-time monitoring, it stores the video data for later use in order to provide possible evidence in criminal cases.\r\nNew York City has a similar network called the Domain Awareness System.\r\nLondon also has a network of CCTV systems that allows multiple authorities to view and control CCTV cameras in real time. The system allows authorities including the Metropolitan Police Service, Transport for London and a number of London boroughs to share CCTV images between them. It uses a network protocol called Television Network Protocol to allow access to many more cameras than each individual system owner could afford to run and maintain.\r\nThe Glynn County Police Department uses a wireless mesh-networked system of portable battery-powered tripods for live megapixel video surveillance and central monitoring of tactical police situations. The systems can be used either on a stand-alone basis with secure communications to nearby police laptops, or within a larger mesh system with multiple tripods feeding video back to the command vehicle via wireless, and to police headquarters via 3G.\r\n<span style=\"font-weight: bold; \">Integrated systems</span>\r\nIntegrated systems allow different security systems, such as CCTV, access control, intruder alarms and intercoms, to operate together. For example, when an intruder alarm is activated, CCTV cameras covering the intrusion area record at a higher frame rate and transmit to an Alarm Receiving Centre.\r\n<span style=\"font-weight: bold; \">Wireless security cameras</span>\r\nMany consumers are turning to wireless security cameras for home surveillance. Wireless cameras do not require a video cable for video/audio transmission, only a cable for power. Wireless cameras are also easy and inexpensive to install, but lack the reliability of hard-wired cameras.
Previous generations of wireless security cameras relied on analog technology; modern wireless cameras use digital technology, which delivers crisper audio, sharper video, and a secure and interference-free signal.\r\n<span style=\"font-weight: bold;\">Talking CCTV</span>\r\nIn 2003, a pilot scheme in Wiltshire, UK, put into action what is now known as "Talking CCTV", allowing operators of CCTV cameras to order offenders to stop what they were doing, ranging from ordering subjects to pick up their rubbish and put it in a bin to ordering groups of vandals to disperse. In 2005, Ray Mallon, the mayor and former senior police officer of Middlesbrough, implemented "Talking CCTV" in his area.\r\nOther towns have had such cameras installed. In 2007 several of the devices were installed in Bridlington town centre, East Riding of Yorkshire.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_CCTV.png","alias":"cctv-closed-circuit-television"},"51":{"id":51,"title":"PaaS - Platform as a service","description":"<span style=\"font-weight: bold; \">Platform as a Service (PaaS)</span> or <span style=\"font-weight: bold; \">Application Platform as a Service (aPaaS)</span> or <span style=\"font-weight: bold; \">platform-based service</span> is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.\r\n<span style=\"font-weight: bold; \">PaaS can be delivered in three ways:</span>\r\n<span style=\"font-weight: bold;\">As a public cloud service</span> from a provider, where the consumer controls software deployment with minimal configuration options, and the provider provides the networks, servers, storage, operating system (OS), middleware (e.g.
Java runtime, .NET runtime, integration, etc.), database and other services to host the consumer's application.\r\n<span style=\"font-weight: bold;\">As a private service</span> (software or appliance) behind a firewall.\r\n<span style=\"font-weight: bold;\">As software</span> deployed on a public infrastructure as a service.\r\n<span style=\"color: rgb(97, 97, 97); \">The original intent of PaaS technology was to simplify the code-writing process for developers, with the infrastructure and operations handled by the PaaS provider. Originally, all PaaSes were in the public cloud. Because many companies did not want to have everything in the public cloud, private and hybrid PaaS options (managed by internal IT departments) were created.</span>\r\n<span style=\"color: rgb(97, 97, 97); \">PaaS provides an environment for developers and companies to create, host and deploy applications, saving developers from the complexities of the infrastructure side (setting up, configuring and managing elements such as servers and databases).</span>\r\n<span style=\"color: rgb(97, 97, 97); \">PaaS products can improve the speed of developing an app, and allow the consumer to focus on the application itself. With PaaS, the consumer manages applications and data, while the provider (in public PaaS) or IT department (in private PaaS) manages runtime, middleware, operating system, virtualization, servers, storage and networking.</span>\r\n<span style=\"color: rgb(97, 97, 97); \">PaaS offerings may also include facilities for application design, application development, testing and deployment, as well as services such as team collaboration, web service integration, and marshalling, database integration, security, scalability, storage, persistence, state management, application versioning, application instrumentation, and developer community facilitation. 
Besides the service engineering aspects, PaaS solutions include mechanisms for service management, such as monitoring, workflow management, discovery and reservation.</span>\r\nThere are various types of PaaS providers. All offer application hosting and a deployment environment, along with various integrated services. Services offer varying levels of scalability and maintenance. Developers can write an application and upload it to a PaaS platform that supports their software language of choice, and the application runs on that PaaS.","materialsDescription":"<h1 class=\"align-center\">How PaaS works</h1>\r\n<p class=\"align-left\">PaaS does not replace a company's entire IT infrastructure for software development. It is provided through a cloud service provider's hosted infrastructure, with users most frequently accessing the offerings through a web browser. PaaS can be delivered through public, private and hybrid clouds to deliver services such as application hosting and Java development.</p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold; \">Other PaaS services include:</span></p>\r\n<ul><li>Development team collaboration</li><li>Application design and development</li><li>Application testing and deployment</li><li>Web service integration</li><li>Information security</li><li>Database integration</li></ul>\r\n<p class=\"align-left\">Users typically pay for PaaS on a per-use basis, although some platform-as-a-service providers charge a flat monthly fee for access to the platform and its applications.</p>\r\n<h1 class=\"align-center\">What are the types of PaaS?</h1>\r\n<ul><li><span style=\"font-weight: bold; \">Public PaaS</span></li></ul>\r\nA public PaaS allows the user to control software deployment while the cloud provider manages the delivery of all other major IT components necessary for the hosting of applications, including operating systems, databases, servers and storage system networks.
\r\nPublic PaaS vendors offer middleware that enables developers to set up, configure and control servers and databases without having to set up the infrastructure side of things. As a result, public PaaS and IaaS (infrastructure as a service) run together, with PaaS operating on top of a vendor's IaaS infrastructure while leveraging the public cloud. \r\n<ul><li><span style=\"font-weight: bold; \">Private PaaS</span></li></ul>\r\nA private PaaS is usually delivered as an appliance or software within the user's firewall, frequently maintained in the company's on-premises data center. Private PaaS software can be developed on any type of infrastructure and can work within the company's specific private cloud. Private PaaS allows an organization to better serve developers, improve the use of internal resources and reduce the costly cloud sprawl that many companies face.\r\n<ul><li><span style=\"font-weight: bold; \">Hybrid PaaS </span></li></ul>\r\nHybrid PaaS combines public PaaS and private PaaS to provide companies with the flexibility of the virtually unlimited capacity of a public PaaS model and the cost efficiencies of owning internal infrastructure in a private PaaS. Hybrid PaaS utilizes a hybrid cloud.\r\n<ul><li><span style=\"font-weight: bold; \">Communication PaaS </span></li></ul>\r\nCPaaS is a cloud-based platform that allows developers to add real-time communications to their apps without the need for back-end infrastructure and interfaces. Normally, real-time communications occur in apps that are built specifically for these functions. Examples include Skype, FaceTime, WhatsApp and the traditional phone. CPaaS provides a complete development framework for the creation of real-time communications features without the developer having to build their own framework.\r\n<ul><li><span style=\"font-weight: bold; \">Mobile PaaS</span> </li></ul>\r\nMPaaS is the use of a paid integrated development environment for the configuration of mobile apps.
In an mPaaS, coding skills are not required. MPaaS is delivered through a web browser and typically supports public cloud, private cloud and on-premises storage. The service is usually leased with pricing per month, varying according to the number of included devices and supported features.\r\n<ul><li><span style=\"font-weight: bold; \">Open PaaS</span></li></ul>\r\nIt is a free, open source, business-oriented collaboration platform that is attractive on all devices and provides useful web apps, including calendar, contacts and mail applications. OpenPaaS was designed to allow users to quickly deploy new applications with the goal of developing a PaaS technology that is committed to enterprise collaborative applications, specifically those deployed on hybrid clouds.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/PaaS_-_Platform_as_a_service.png","alias":"paas-platform-as-a-service"},"54":{"id":54,"title":"MDM - master data management","description":"<span style=\"font-weight: bold; \">Master data management (MDM)</span> is the core process used to manage, centralize, organize, categorize, localize, synchronize and enrich master data according to the business rules of the sales, marketing and operational strategies of your company. \r\nIt is a technology-enabled discipline in which business and IT work together to ensure the uniformity, accuracy, stewardship, semantic consistency and accountability of the enterprise’s official shared master data assets. 
Master data is the consistent and uniform set of identifiers and extended attributes that describes the core entities of the enterprise, including customers, prospects, citizens, suppliers, sites, hierarchies and the chart of accounts.\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">Essential Master Data Management Capabilities</span></p>\r\n<span style=\"font-weight: bold; \">Flexible and multi-domain.</span> An extensible master data repository with flexible data modeling features provides a centralized view of all relationships between data types, clarifying complex cross-domain relationships.\r\n<span style=\"font-weight: bold; \">Multi-style MDM.</span> A Master Data Management platform should support all four main styles of MDM:\r\n<span style=\"font-style: italic;\">Centrally authored:</span> Data is authored in the MDM, and other systems subscribe to the MDM for master data (or the MDM pushes the data into downstream applications).\r\n<span style=\"font-style: italic;\">Consolidation:</span> Source systems feed data into the MDM for consolidation into golden records.\r\n<span style=\"font-style: italic;\">Coexistence:</span> A mashup of centrally authored and consolidation that allows for creation of data in multiple systems.\r\n<span style=\"font-style: italic;\">Registry:</span> Rather than consolidating records, unique identifiers from across all the systems are joined/aligned in join tables.\r\n<span style=\"font-weight: bold; \">Real-time, secure data.</span> The top MDM software today allows you to publish and subscribe to data on demand, providing accurate master data to systems when and how you need it without compromising security.
With real-time data, users can better react to the data and make faster decisions based on the insights discovered.\r\n<span style=\"font-weight: bold; \">Data and workflow visualization.</span> Master Data Management software provides a data visualization component that allows you to identify and easily fix quality issues. The capability can also help users collaborate to constantly make improvements, monitor processes, and create dashboards for actionable data analysis.\r\n<span style=\"font-weight: bold; \">A customizable, business-friendly user interface.</span> A zero-coding, visual design-time environment allows you to develop custom UIs using simple drag-and-drop actions. You can design cleaner, simpler, and more flexible role-based user interfaces for your Master Data Management system.\r\n\r\n","materialsDescription":"<h1 class=\"align-center\">Things to Look for in MDM Management Software</h1>\r\n<p class=\"align-left\">Because MDM is such a major task, you need the right software solution to assist you. The good news is that you have plenty of selections to choose from. The hard part is deciding on one. Here are a handful of features to look for:</p>\r\n<span style=\"font-weight: bold;\">Flexibility.</span> MDM isn’t a static issue. MDM software vendors continually update their products, so solutions can change rather dramatically over the course of a few months or years. With that being said, it’s smart to look at flexibility when comparing master data management tools. You may have a very specific need now, but will your solution allow you to address a future need that looks considerably different? \r\n<span style=\"font-weight: bold;\">Modeling.</span> Can you leverage the data model(s) of the member applications and eliminate the need to model? It could save you time and money, and help your master data be readily consumed without requiring additional transformation from an abstract data model to the data model in the member application(s).
\r\n<span style=\"font-weight: bold;\">Cost.</span> While it shouldn’t be the only factor, money is obviously something that must be considered in the context of budgeting. This may be one of the first factors you use to narrow your choices. If you know you can only spend X dollars, then there’s no point in evaluating selections that cost more.\r\n <span style=\"font-weight: bold;\">Scalability.</span> How well does the solution scale? Your business is a fluid entity that will grow, contract, stagnate, grow again, etc. There’s no point in investing in thebest master data management tools that can only be used at your current size. Find one that easily grows and contracts in a cost-effective manner.\r\n <span style=\"font-weight: bold;\">Integration.</span> The final thing to think about is integration. Since the point of MDM software is to create a centralized destination for data, you need to carefully ensure that it will work with your current setup.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/MDM_-_master_data_management1.png","alias":"mdm-master-data-management"},"55":{"id":55,"title":"Structuring Cabling","description":"In telecommunications, structured cabling is building or campus cabling infrastructure that consists of a number of standardized smaller elements (hence structured) called subsystems.\r\nStructured cabling is the design and installation of a cabling system that will support multiple hardware uses and be suitable for today's needs and those of the future. With a correctly installed system, current and future requirements can be met, and hardware that is added in the future will be supported.\r\nStructured cabling design and installation is governed by a set of standards that specify wiring data centers, offices, and apartment buildings for data or voice communications using various kinds of cable, most commonly category 5e (Cat 5e), category 6 (Cat 6), and fiber optic cabling and modular connectors. 
These standards define how to lay the cabling in various topologies in order to meet the needs of the customer, typically using a central patch panel (which is normally 19-inch rack-mounted), from where each modular connection can be used as needed. Each outlet is then patched into a network switch (normally also rack-mounted) for network use or into an IP or PBX (private branch exchange) telephone system patch panel.\r\nLines patched as data ports into a network switch require simple straight-through patch cables at each end to connect a computer. Voice patches to PBXs in most countries require an adapter at the remote end to translate the configuration on 8P8C modular connectors into the local standard telephone wall socket. No adapter is needed in North America as the 6P2C and 6P4C plugs most commonly used with RJ11 and RJ14 telephone connections are physically and electrically compatible with the larger 8P8C socket. RJ25 and RJ61 connections are physically but not electrically compatible, and cannot be used. In the United Kingdom, an adapter must be present at the remote end as the 6-pin BT socket is physically incompatible with 8P8C.\r\nIt is common to color-code patch panel cables to identify the type of connection, though structured cabling standards do not require it except in the demarcation wall field.\r\nCabling standards require that all eight conductors in Cat 5e/6/6A cable be connected.\r\nIP phone systems can run the telephone and the computer on the same wires, eliminating the need for separate phone wiring.\r\nRegardless of copper cable type (Cat 5e/6/6A), the maximum distance is 90 m for the permanent link installation, plus an allowance for a combined 10 m of patch cords at the ends.\r\nCat 5e and Cat 6 can both effectively run power over Ethernet (PoE) applications up to 90 m. 
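The distance rules just stated can be captured in a small sketch — the helper below is hypothetical and only encodes the 90 m permanent link plus the combined 10 m patch-cord allowance described in the text:

```python
# Sketch of the structured-cabling channel length rule:
# 90 m maximum permanent link, plus a combined 10 m of patch cords
# at the ends, regardless of copper category (Cat 5e/6/6A).
PERMANENT_LINK_MAX_M = 90
PATCH_CORD_ALLOWANCE_M = 10

def channel_ok(permanent_link_m, patch_cords_m):
    """Return True if a proposed copper run fits within the channel limits."""
    return (permanent_link_m <= PERMANENT_LINK_MAX_M
            and patch_cords_m <= PATCH_CORD_ALLOWANCE_M)

print(channel_ok(85, 8))   # within limits -> True
print(channel_ok(95, 5))   # permanent link too long -> False
```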
However, due to greater power dissipation in Cat 5e cable, performance and power efficiency are higher when Cat 6A cabling is used to power and connect to PoE devices.","materialsDescription":" <span style=\"font-weight: bold;\">What is structured cabling?</span>\r\nStructured cabling is the highway that information travels on in a building. The building can be large or small, commercial or residential, or a combination of both, as in the mixed-use retail, commercial, and residential buildings now found in most large cities. Structured cabling systems are designed around telecommunications code standards to ensure that computer equipment will operate as designed when connected to the structured cabling system. These standards cover factors such as distance limitations, cable types, flammability ratings, and bend radii.\r\n<span style=\"font-weight: bold;\">Cat5e/Cat6: what’s the difference?</span>\r\nThe general difference between Cat5e cabling and Cat6 cabling is in the transmission performance and the extension of the available bandwidth from 100 MHz for category 5e to 250 MHz for category 6. This includes better insertion loss, near-end crosstalk (NEXT), return loss, and equal level far-end crosstalk (ELFEXT). These improvements provide a higher signal-to-noise ratio, allowing higher reliability for current applications and higher data rates for future applications.\r\n<span style=\"font-weight: bold;\">Do I need Plenum or PVC?</span>\r\nPlenum cable is designed to operate in a “return air” space in the building. Typically, these spaces are above a suspended ceiling or beneath a raised floor. They are said to be a “return air” space because that is where the HVAC system gets the air it heats or cools. If ever in question, the building inspector is typically the AHJ (authority having jurisdiction). Plenum cable is more expensive than PVC because of the less flammable compounds used in production.
A plenum cable must pass a burn test that measures flame spread and smoke emissivity when exposed to a flame of a certain intensity and duration.\r\n<span style=\"font-weight: bold;\">Do I need 1 or 2 cables per work area?</span>\r\nThis decision is a commonly debated topic. The fact is that the cable is very inexpensive relative to the entire telecommunications system and the building that it serves. The increased functionality and bandwidth that one additional data cable can provide at each work area outlet can prove to be priceless, especially after the drywall is in place.\r\n<span style=\"font-weight: bold;\">Do I need a cabinet, or can I just plug straight into my equipment?</span>\r\nA cabinet is always recommended, even for the smallest installs. Cabling plugged directly into equipment has a tendency to break away at the termination ends, as the solid cable is not suitable for direct termination. Also, a cabinet provides protection for the equipment from theft, breakage, dust, and employees. Cabinets also allow all the equipment to be stored together and in a manageable way for moves and changes.\r\n<span style=\"font-weight: bold;\">Why do I need such a big cabinet?</span>\r\nThe cabinet should be large enough to house the current equipment with some space for possible future requirements, e.g. housing a VoIP telephone system. The depth of the cabinet should be chosen with the equipment to be stored in mind. Some ISP switches and blade servers are extra deep and require an 800/1000 mm deep cabinet.\r\n<span style=\"font-weight: bold;\">What do data cable installation test results show?</span>\r\nTest results cover a range of tests depending on the grade of cabling used (Cat5e/Cat6, etc.). These tests for Cat6 include Wire Map, Length, Insertion Loss, NEXT Loss, PS NEXT Loss, ACRF Loss, PS ACRF Loss, Return Loss, Propagation Delay, and Delay Skew.
These are tests to ensure installation standards have been met, the terminations have been done correctly and that the cable doesn’t have any unnecessary bends, kinks, and twists.\r\n<span style=\"font-weight: bold;\">What should the end deliverable be for a structured cabling system?</span>\r\nWhen properly designed and installed, the end deliverable should be a structured cabling system that supports the customer’s needs now and well into the foreseeable future. The Main Distribution Frames and Intermediate Distribution Frames should be well thought out, and cables should be neatly dressed. It should have additional cable runs that support a wireless overlay and have sufficient bandwidth in the backbone to handle a step-change in bandwidth needs. For the last 20 years, clients have utilized more bandwidth in the current year than the year preceding it. Nobody ever says “we put in too much cable.”","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Structuring_Cabling.png","alias":"structuring-cabling"},"172":{"id":172,"title":"WLAN - wireless network","description":"Unified Communications (UC) is a marketing buzzword describing the integration of real-time, enterprise, communication services such as instant messaging (chat), presence information, voice (including IP telephony), mobility features (including extension mobility and single number reach), audio, web & video conferencing, fixed-mobile convergence (FMC), desktop sharing, data sharing (including web connected electronic interactive whiteboards), call control and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS and fax). 
UC is not necessarily a single product, but a set of products that provides a consistent, unified user interface and user experience across multiple devices and media types.\r\n\r\nIn its broadest sense, UC can encompass all forms of communications that are exchanged via a network, including other forms of communications such as Internet Protocol Television (IPTV) and digital signage communications as they become an integrated part of the network communications deployment, and may be directed as one-to-one communications or broadcast communications from one to many.\r\n\r\nUC allows an individual to send a message on one medium, and receive the same communication on another medium. For example, one can receive a voicemail message and choose to access it through e-mail or a cell phone. If the sender is online according to the presence information and currently accepts calls, the response can be sent immediately through text chat or video call. Otherwise, it may be sent as a non-real-time message that can be accessed through a variety of media.\r\n\r\nSource: https://en.wikipedia.org/wiki/Unified_communications","materialsDescription":"","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/WLAN_-_wireless_network.png","alias":"wlan-wireless-network"},"178":{"id":178,"title":"IoT - Internet of Things","description":"The Internet of things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity, and other forms of hardware (such as sensors), these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled.\r\nThe definition of the Internet of things has evolved due to the convergence of multiple technologies, real-time analytics, machine learning, commodity sensors, and embedded systems. Traditional fields of embedded systems, wireless sensor networks, control systems, automation (including home and building automation),
and others all contribute to enabling the Internet of things. In the consumer market, IoT technology is most synonymous with products pertaining to the concept of the \"smart home\", covering devices and appliances (such as lighting fixtures, thermostats, home security systems and cameras, and other home appliances) that support one or more common ecosystems, and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers.\r\nThe IoT concept has faced prominent criticism, especially with regard to privacy and security concerns related to these devices and their pervasive presence.","materialsDescription":"<span style=\"font-weight: bold;\">What is the Internet of Things (IoT)?</span>\r\nThe Internet of things refers to the network of things (physical objects) that can be connected to the Internet to collect and share data without human-to-human or human-to-computer interaction.\r\n<span style=\"font-weight: bold;\">Why is it called the Internet of Things?</span>\r\nThe term Internet of things was coined by Kevin Ashton in 1999. Stemming from Kevin Ashton’s experience with RFID, the term Internet of things originally described the concept of tagging every object in a person’s life with machine-readable codes. This would allow computers to easily manage and inventory all of these things.\r\nThe term IoT today has evolved to a much broader scope. It now encompasses ubiquitous connectivity, devices, sensors, analytics, machine learning, and many other technologies.\r\n<span style=\"font-weight: bold;\">What is an IoT solution?</span>\r\nAn IoT solution is a combination of devices or other data sources, outfitted with sensors and Internet-connected hardware to securely report information back to an IoT platform.
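A minimal sketch of that reporting pattern, with an invented device ID and metric (no real IoT platform API is assumed — the reading is simply packaged as a JSON message):

```python
import json
import time

# Hypothetical IoT device packaging one sensor reading for a platform.
def build_report(device_id, metric, value, unit):
    """Package a single sensor reading as a JSON message."""
    return json.dumps({
        "device_id": device_id,
        "metric": metric,
        "value": value,
        "unit": unit,
        "timestamp": int(time.time()),  # when the reading was taken
    })

# An invented thermostat reporting a temperature reading
msg = build_report("thermostat-42", "temperature", 21.5, "celsius")
print(msg)
```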
This information is often a physical metric which can help users answer a question or solve a specific problem.\r\n<span style=\"font-weight: bold;\">What is an IoT Proof of Concept (PoC)?</span>\r\nThe purpose of a PoC is to experiment with a solution in your environment, collect data, and evaluate performance from a set timeline on a set budget. A PoC is a low-risk way to introduce IoT to an organization.\r\n<span style=\"font-weight: bold;\">What is an IoT cloud platform?</span>\r\nAn IoT platform provides users with one or more of these key elements — visualization tools, data security features, a workflow engine and a custom user interface to utilize the information collected from devices and other data sources in the field. These platforms are based in the cloud and can be accessed from anywhere.\r\n<span style=\"font-weight: bold;\">What is industrial equipment monitoring?</span>\r\nIndustrial equipment monitoring uses a network of connected sensors - either native to a piece of equipment or retrofitted - to inform owners/operators of a machine’s output, component conditions, need for service or impending failure. Industrial equipment monitoring is an IoT solution which can utilize an IoT platform to unify disparate data and enable decision-makers to respond to real-time data.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/IoT_-_Internet_of_Things.png","alias":"iot-internet-of-things"},"217":{"id":217,"title":"Business-to-Business Middleware","description":" Middleware is a very broad term that can be defined as a translation layer between different applications and encompasses a number of different technologies, such as message-oriented middleware and database middleware. 
B2B middleware, though, has a narrower definition and is concerned first and foremost with routing data from a firm’s business applications to the applications of business partners such as customers, suppliers or banks.\r\nData must be extracted from the source system, which might be an ERP system, a securities trading platform or an HR system, whether it is an installed system or, as is increasingly the case, a cloud-based system. Data can be extracted using an API or specialized middleware supplied by the enterprise application.\r\nOnce the data has been extracted, it must be correctly formatted so that it can be shared by a completely different system. Typical standard formats are EDI or XML. However, each of these formats has variants specific to particular vertical industries. When the data has been formatted, it must then be transmitted to the business partner and, once again, there are a number of different network protocols, such as the HTTP-based AS1 and AS2, or FTP, to support B2B integration.","materialsDescription":" <span style=\"font-weight: bold; \">What is Middleware?</span>\r\nMiddleware is the software that connects network-based requests generated by a client to the back-end data the client is requesting. It is a general term for software that serves to \"glue together\" separate, often complex and already existing programs.\r\nMiddleware programs come in on-premises software and cloud services, and they can be used independently or together, depending upon the use case. While cloud providers bundle middleware into cloud services suites, such as middleware as a service (MWaaS) or integration PaaS (iPaaS), industry researchers note that many businesses still choose independent middleware products that fit their specific needs.\r\n<span style=\"font-weight: bold; \">How middleware works</span>\r\nAll network-based requests are essentially attempts to interact with back-end data.
That data might be something as simple as an image to display or a video to play, or it could be as complex as a history of banking transactions.\r\nThe requested data can take on many different forms and may be stored in a variety of ways, such as coming from a file server, fetched from a message queue or persisted in a database. The role of middleware is to enable and ease access to those back-end resources.\r\n<span style=\"font-weight: bold;\">Middleware categories</span>\r\nIn general, IT industry analysts -- such as Gartner Inc. and Forrester Research -- put middleware into two categories: enterprise integration middleware and platform middleware.\r\n<ul><li>Enterprise application integration middleware enables programmers to create business applications without having to custom-craft integrations for each new application. Here, middleware helps software and services components work together, providing a layer of functionality for data consistency and multi-enterprise or B2B integration. Typically, integration middleware provides messaging services, so different applications can communicate using messaging frameworks like Simple Object Access Protocol (SOAP), web services, Representational State Transfer (REST) or JavaScript Object Notation (JSON). Other middleware technologies used in this category include Object Request Brokers (ORBs), data representation technologies like XML and JavaScript Object Notation (JSON), and more.</li></ul>\r\nBusinesses can purchase individual middleware products or on-premises or cloud-based application integration suites.\r\n<ul><li>Platform middleware supports software development and delivery by providing a runtime hosting environment, such as a container, for application program logic. Its primary components are in-memory and enterprise application servers, as well as web servers and content management. 
Middleware includes web servers, application servers, content management systems and similar tools that support application development and delivery. Generally, embedded or external communications middleware allows different communications tools to work together. These communications tools enable application and service interaction. Resource management services, such as Microsoft Azure Resource Manager, host application program logic at runtime, another key function in platform middleware. Other components include Trusted Platform Modules (TPMs) and in-memory data grids (IMDGs).</li></ul>\r\nPlatform middleware products are also available as specific on-premises or cloud service tools, as well as multitool suites. In a cloud suite, middleware as a service offers an integrated set of platform tools and the runtime environment.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Business-to-Business_Middleware.png","alias":"business-to-business-middleware"},"219":{"id":219,"title":"Event-Driven Middleware","description":" Event management software is the generic term for a wide range of software products that are used in the management of professional and academic conferences, trade exhibitions, conventions and smaller events such as Continuing Professional Development (CPD) meetings.\r\nThe most common event management applications are:\r\n<ul><li>Event schedule planning</li><li>Customized event website creation</li><li>Online event registration, ticketing and delegate management including online payment</li><li>Event budgeting</li><li>Lead retrieval</li><li>Venue selection</li><li>Event Marketing</li><li>Event Networking for attendee engagement</li><li>Procurement, sourcing and RFPs</li><li>Content management including abstract and/or paper management, reviewing, programme development and publishing</li><li>Exhibition management including floor planning, booking and billing</li><li>On-site operations including registration, badges and
networking</li><li>Audience response solutions, live slide sharing and second screen tools as live polls, Q+A, etc.</li></ul>","materialsDescription":" <span style=\"font-weight: bold;\">What is the event-driven architecture?</span>\r\nThe event-driven architecture is a software architecture and model for application design. With an event-driven system, the capture, communication, processing, and persistence of events are the core structure of the solution. This differs from a traditional request-driven model.\r\nAn event is any significant occurrence or change in state for system hardware or software. An event is not the same as an event notification, which is a message or notification sent by the system to notify another part of the system that an event has taken place. \r\nThe source of an event can be from internal or external inputs. Events can generate from a user, like a mouse click or keystroke, an external source, such as a sensor output, or come from the system, like loading a program.\r\nMany modern application designs are event-driven. Event-driven apps can be created in any programming language because event-driven is a programming approach, not a language. The event-driven architecture enables minimal coupling, which makes it a good option for modern, distributed application architectures.\r\nAn event-driven architecture is loosely coupled because event producers don’t know which event consumers are listening for an event, and the event doesn’t know what the consequences are of its occurrence.\r\n<span style=\"font-weight: bold;\">How does event-driven architecture work?</span>\r\nThe event-driven architecture is made up of event producers and event consumers. An event producer detects or senses an event and represents the event as a message. 
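The producer/consumer decoupling described here can be sketched as a minimal in-process event channel (a toy illustration; real event-driven systems route events through brokers or event processing platforms):

```python
# Minimal event-driven sketch: producers publish to a channel,
# consumers subscribe, and neither knows about the other.
class EventChannel:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        """Register a consumer's handler for one type of event."""
        self._subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # The producer only hands the event to the channel;
        # it never sees which consumers (if any) receive it.
        for handler in self._subscribers.get(event_type, []):
            handler(payload)

channel = EventChannel()
received = []
channel.subscribe("sensor.reading", received.append)  # consumer
channel.publish("sensor.reading", {"temp": 21.5})     # producer
print(received)  # [{'temp': 21.5}]
```

Because the producer addresses only the channel, consumers can be added or removed without changing producer code — the loose coupling the text describes.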
The producer does not know the consumer of the event or the outcome of an event.\r\nAfter an event has been detected, it is transmitted from the event producer to the event consumers through event channels, where an event processing platform processes the event asynchronously. Event consumers need to be informed when an event has occurred. They might process the event or may only be impacted by it.\r\nThe event processing platform will execute the correct response to an event and send the activity downstream to the right consumers. This downstream activity is where the outcome of an event is seen.\r\n<span style=\"font-weight: bold;\">What are the benefits of event-driven architecture?</span>\r\nAn event-driven architecture can help organizations achieve a flexible system that can adapt to changes and make decisions in real time. Real-time situational awareness means that business decisions, whether manual or automated, can be made using all of the available data that reflects the current state of your systems.\r\nEvents are captured as they occur from event sources such as Internet of Things (IoT) devices, applications, and networks, allowing event producers and event consumers to share status and response information in real time.\r\nOrganizations can add event-driven architecture to their systems and applications to improve the scalability and responsiveness of applications and access to the data and context needed for better business decisions.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Event-Driven_Middleware.png","alias":"event-driven-middleware"},"221":{"id":221,"title":"Process Automation Middleware","description":" At the current level of development, process automation is one of the approaches to process management based on the use of information technology.
This approach allows the management of operations, data, information and resources through the use of computers and software that reduce the degree of human participation in the process, or completely eliminate it.\r\nThe main goal of automation is to improve the quality of the process. An automated process has more stable characteristics than a manual process. In many cases, process automation can increase productivity, reduce process execution time, reduce cost, and increase the accuracy and stability of operations.\r\nTo date, process automation has covered many industries and areas of activity: from manufacturing processes to shopping in stores. Regardless of the size and scope of the organization, almost every company has automated processes. The process approach applies the same principles of automation to all processes.\r\nAlthough process automation can be performed at various levels, the principles of automation remain the same for all levels and all types of processes. These are general principles that set the conditions for the efficient execution of processes in automatic mode and establish rules for automatic process control.\r\nThe basic principles of process automation are: the principle of consistency, the principle of integration, and the principle of independence of execution. These general principles are detailed depending on the level of automation under consideration and the specific processes. For example, automation of production processes includes principles such as the principle of specialization, the principle of proportionality, the principle of continuity, etc.","materialsDescription":" <span style=\"font-weight: bold; \">What are the levels of process automation?</span>\r\nProcess automation is needed to support management at all levels of the company hierarchy.
In this regard, the levels of automation are determined depending on the level of control at which the automation of processes is performed.\r\nManagement levels are usually divided into operational, tactical and strategic.\r\nIn accordance with these levels, automation levels are also distinguished:\r\n<ul><li><span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Lower level of automation or level of performers.</span></span> At this level, automation of regularly running processes is carried out. Automation of processes is aimed at performing operational tasks (for example, executing a production process), maintaining established parameters (for example, autopilot operation), and maintaining certain operating modes (for example, temperature conditions in a gas boiler).</li><li><span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Production management level or tactical level.</span></span> Automation of processes of this level ensures the distribution of tasks between various processes of the lower level. Examples of such processes are production management processes (production planning, service planning), processes of managing resources, documents, etc.</li><li><span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Enterprise management level or strategic level.</span></span> Automation of the processes of the enterprise management level provides the solution of analytical and forecast tasks. This level of automation is necessary to support the work of top management of the organization. It is aimed at financial, economic and strategic management.</li></ul>\r\nAutomation of processes at each of these levels is provided through the use of various automation systems (CRM systems, ERP systems, OLAP systems, etc.). 
All automation systems can be divided into three basic types.\r\nTypes of automation systems include:\r\n<ul><li><span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">immutable systems.</span></span> These are systems in which the sequence of actions is determined by the configuration of the equipment or process conditions and cannot be changed during the process.</li><li><span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">programmable systems.</span></span> These are systems in which the sequence of actions may vary depending on a given program and process configuration. The selection of the necessary sequence of actions is carried out through a set of instructions that can be read and interpreted by the system.</li><li><span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">flexible (self-adjusting) systems.</span></span> These are systems that are able to carry out the selection of necessary actions in the process of work. Changing the configuration of the process (sequence and conditions of operations) is based on information about the process.</li></ul>\r\nThese types of systems can be applied at all levels of process automation individually or as part of a combined system.\r\n<span style=\"font-weight: bold; \">What are the types of automated processes?</span>\r\nIn each sector of the economy, there are enterprises and organizations that produce products or provide services. All these enterprises can be divided into three groups, depending on their “remoteness” in the natural resource processing chain.\r\nThe first group of enterprises is enterprises that extract or produce natural resources. Such enterprises include, for example, agricultural producers, oil and gas companies.\r\nThe second group of enterprises is enterprises that process natural raw materials. They make products from raw materials mined or produced by enterprises of the first group. 
Such enterprises include, for example, automobile industry enterprises, steel enterprises, electronic industry enterprises, power plants, etc.\r\nThe third group is service enterprises. Such organizations include, for example, banks, educational institutions, medical institutions, restaurants, etc.\r\nFor all enterprises, we can distinguish common groups of processes associated with the production of products or the provision of services.\r\nThese processes include:\r\n<ul><li>business processes;</li><li>design and development processes;</li><li>production processes;</li><li>control and analysis processes.</li></ul>\r\n<span style=\"font-weight: bold;\">What are the benefits of process automation?</span>\r\nProcess automation can significantly improve the quality of management and of products. Combined with a quality management system (QMS), automation delivers a significant effect and enables the organization to substantially improve its work. However, before deciding on process automation, it is necessary to evaluate the benefits of running processes in an automatic mode.\r\nTypically, process automation provides the following benefits:\r\n<ul><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">the speed of completing repetitive tasks increases.</span></span> Due to the automatic mode, the same tasks can be completed faster because automated systems perform operations more accurately and are not prone to performance degradation over long periods of work.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">quality of work increases.</span></span> The exclusion of the human factor significantly reduces variations in the execution of the process, which leads to a decrease in the number of errors and, accordingly, increases the stability and quality of the process.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">increases control accuracy.</span></span> Due to the use of information technology in
automated systems, it becomes possible to save and take into account a greater amount of process data than with manual control.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">parallel tasks.</span></span> Automated systems allow you to perform several actions at the same time without loss of quality and accuracy. This speeds up the process and improves the quality of the results.</li><li><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">quick decision making in typical situations.</span></span> In automated systems, decisions related to typical situations are made much faster than with manual control. This improves the performance of the process and avoids inconsistencies in subsequent stages.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Process_Automation_Middleware.png","alias":"process-automation-middleware"},"227":{"id":227,"title":"Advanced Analytics Software","description":" <span style=\"font-weight: bold;\">Advanced analytics</span> is a part of data science that uses high-level methods and tools to focus on projecting future trends, events, and behaviors. This gives organizations the ability to perform advanced statistical models such as ‘what-if’ calculations, as well as future-proof various aspects of their operations.\r\nThe term is an umbrella for several sub-fields of analytics that work together in their predictive capabilities.\r\nThe major areas that make up advanced analytics are predictive data analytics, big data, and data mining. The process of advanced analytics includes all three areas at various times.\r\n<span style=\"font-weight: bold;\">Data mining</span> is a key aspect of advanced analytics, providing the raw data that will be used by both big data and predictive analytics. 
<span style=\"font-weight: bold;\">Big data analytics</span> are useful in finding existing insights and creating connections between data points and sets, as well as cleaning data.\r\nFinally, <span style=\"font-weight: bold;\">predictive analytics</span> can use these clean sets and existing insights to extrapolate and make predictions and projections about future activity, trends, and consumer behaviors.\r\nAdvanced analytics also include newer technologies such as machine learning and artificial intelligence, semantic analysis, visualizations, and even neural networks. Taken together, they help advanced analytics software create an accurate enough canvas to make reliable predictions and generate actionable BI insights on a deeper level.","materialsDescription":"<h1 class=\"align-center\">A list of tips on how to manage the process of building an advanced analytics program</h1>\r\n<ul><li>Start with a proof-of-concept project to demonstrate the potential business value of analytics applications.</li><li>Take training seriously. New data management and analytics skills likely will be needed, especially if big data platforms and tools like SAS advanced analytics tools are involved.</li><li>Develop processes to ensure that business units are ready to act on analytical findings so the work of data scientists and other analysts doesn't go to waste.</li><li>Monitor and assess advanced and predictive analytics software on a regular basis to make sure the data being analyzed is still relevant and the analytical models being run against it are still valid.</li></ul>\r\n<h1 class=\"align-center\">Advanced analytics tools</h1>\r\nThere are a variety of advanced analytics tools to choose from that offer different advantages based on the use case. They generally break down into two categories: open source and proprietary.\r\nOpen source tools have become a go-to option for many data scientists doing machine learning and prescriptive analytics. 
They include programming languages, as well as computing environments, including Hadoop and Spark. Users typically say they like open source advanced analytics tools because they are generally inexpensive to operate, offer strong functionality and are backed by a user community that continually innovates the tools.\r\nOn the proprietary side, vendors including Microsoft, IBM and the SAS Institute all offer advanced analytics tools. Most require a deep technical background and understanding of mathematical techniques.\r\nIn recent years, however, a crop of self-service analytics tools has matured to make functionality more accessible to business users. Tableau, in particular, has become a common tool. While its functionality is more limited than deeper technical tools, it does enable users to conduct cluster analyses and other advanced analyses.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Advanced_Analytics_Software.png","alias":"advanced-analytics-software"},"231":{"id":231,"title":"Deployment-Centric Application Platforms","description":" Deployment-centric application platforms are driving benefits for organizations embarking on their digital transformation journey.\r\nAs demand for applications increases, businesses need to make sure they have an effective application development platform in place to help them continue to capitalize on the benefits they can provide and meet customer demand. This platform has an integrated development environment that provides tools that allow the developer to program, test and implement applications.","materialsDescription":" <span style=\"font-weight: bold;\">What is software deployment?</span>\r\nSoftware deployment is all of the activities that make a software system available for use.\r\nThe general deployment process consists of several interrelated activities with possible transitions between them. These activities can occur at the producer side or on the consumer side or both. 
Because every software system is unique, the precise processes or procedures within each activity can hardly be defined. Therefore, "deployment" should be interpreted as a general process that has to be customized according to specific requirements or characteristics.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Deployment-Centric_Application_Platforms.png","alias":"deployment-centric-application-platforms"},"233":{"id":233,"title":"Transaction Processing Monitors","description":" <span style=\"font-weight: bold; \">A transaction processing monitor (TPM)</span> is a program that monitors transactions from one stage to the next, ensuring that each one completes successfully; if not, or if an error occurs, the TP monitor takes the appropriate action. A transaction processing monitor’s main purpose is to allow resource sharing and assure optimal use of the resources by applications. This term is sometimes shortened to TP monitor.\r\nA transaction processing monitor is critical in multi-tier architectures. With processes running on different platforms, a given transaction may be forwarded to any one of several servers. Generally, the TP monitor handles all load balancing. After completing each transaction, the TPM can process another transaction without being influenced by the prior transaction. In other words, the TPM model is essentially stateless.\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">Transaction Processing Monitor architecture</span></p>\r\n<p class=\"align-left\">In the TP Monitor Architecture, ACID transactions are initiated by a Begin-Transaction call and terminated by either a Commit-Transaction or an Abort-Transaction call. When initiated, each transaction is assigned a unique identifier and entered into a transaction table managed by the Transaction Manager. Each entry in the transaction table contains the transaction identifier (TRID), the transaction status, and other information. 
When a transaction calls a transaction control operation, such as Commit-Transaction, the Transaction Manager is responsible for carrying out the execution of the command and recording information in the transaction table. </p>\r\n<ul><li><span style=\"font-weight: bold; \">Process per client model</span> - instead of an individual login session per terminal, a server process communicates with the terminal, handles authentication, and executes actions.</li><li><span style=\"font-weight: bold; \">Single process model</span> - all remote terminals connect to a single server process. Used in client-server environments.</li><li><span style=\"font-weight: bold; \">Many-server single-router model</span> - multiple application server processes access a common database; clients communicate with the application through a single communication process that routes requests.</li><li><span style=\"font-weight: bold; \">Many-server many-router model</span> - multiple processes communicate with clients.</li></ul>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">In general, a TPM provides the following functionality:</span></p>\r\n<ul><li>Coordinating resources</li><li>Balancing loads</li><li>Creating new processes as/when needed</li><li>Providing secure access to services</li><li>Routing services</li><li>Wrapping data packets/structures into messages</li><li>Unwrapping messages into data packets/structures</li><li>Monitoring operations/transactions</li><li>Managing queues</li><li>Handling errors through such actions as process restarting</li><li>Hiding interprocess communications details from programmers</li></ul>","materialsDescription":"<h1 class=\"align-center\">Advantages of TP Monitors</h1>\r\nComplex applications are often built on top of several resource managers (such as DBMSs, operating systems, user interfaces, and messaging software). 
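The Begin-Transaction / Commit-Transaction / Abort-Transaction flow and the transaction table described above can be sketched as follows. This is a minimal illustration, not a real TP monitor API: it uses Python's built-in sqlite3 module as the single resource manager, and the TransactionManager class, the TRID counter, and the status values are invented for the example.

```python
import itertools
import sqlite3

class TransactionManager:
    """Toy transaction manager: assigns a TRID, tracks status in a
    transaction table, and commits or aborts against one resource manager."""

    def __init__(self, conn):
        self.conn = conn
        self.trids = itertools.count(1)
        self.table = {}  # TRID -> transaction status

    def begin(self):
        trid = next(self.trids)
        self.table[trid] = "ACTIVE"
        self.conn.execute("BEGIN")  # Begin-Transaction
        return trid

    def commit(self, trid):
        self.conn.commit()          # Commit-Transaction
        self.table[trid] = "COMMITTED"

    def abort(self, trid):
        self.conn.rollback()        # Abort-Transaction
        self.table[trid] = "ABORTED"

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")

tm = TransactionManager(conn)
trid = tm.begin()
try:
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    tm.commit(trid)
except sqlite3.Error:
    tm.abort(trid)

print(tm.table[trid])  # COMMITTED
```

A real TP monitor would additionally coordinate several resource managers at once and route the transaction to the least loaded server; this sketch only shows the control-flow and bookkeeping side.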
A TPM is a middleware component that provides access to the services of a number of resource managers and provides a uniform interface for programmers who are developing transactional software. \r\n<ul><li>Transaction routing: the TP Monitor can increase scalability by directing transactions to specific DBMSs.</li><li>Managing distributed transactions: the TP Monitor can manage transactions that require access to data held in multiple, possibly heterogeneous, DBMSs. For example, a transaction may need to update data items held in an Oracle DBMS at site 1, an Informix DBMS at site 2, and an IMS DBMS at site 3. TP Monitors normally control transactions using the X/Open Distributed Transaction Processing (DTP) standard. A DBMS that supports this standard can function as a resource manager under the control of a TP Monitor acting as a transaction manager.</li><li>Load balancing: the TP Monitor can balance client requests across multiple DBMSs on one or more computers by directing client service calls to the least loaded server. In addition, it can dynamically bring in additional DBMSs as required to provide the necessary performance.</li><li>Funnelling: in environments with a large number of users, it may sometimes be difficult for all users to be logged on simultaneously to the DBMS. In many cases, we would find that users generally do not need continuous access to the DBMS. Instead of each user connecting to the DBMS, the TP Monitor can establish connections with the DBMSs as and when required, and can funnel user requests through these connections. This allows a larger number of users to access the available DBMSs with a potentially much smaller number of connections, which in turn would mean less resource usage.</li><li>Increased reliability: the TP Monitor acts as a transaction manager, performing the necessary actions to maintain the consistency of the database, with the DBMS acting as a resource manager. 
If the DBMS fails, the TP Monitor may be able to resubmit the transaction to another DBMS or can hold the transaction until the DBMS becomes available again.</li></ul>\r\nTP Monitors are typically used in environments with a very high volume of transactions, where the TP Monitor can be used to offload processes from the DBMS server. Prominent examples of TP Monitors include CICS and Encina from IBM (which are primarily used on IBM AIX or Windows NT and now bundled in the IBM TXSeries) and Tuxedo from BEA Systems.\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Transaction_Processing_Monitors.png","alias":"transaction-processing-monitors"},"239":{"id":239,"title":"Relational Database Management Systems","description":" Relational Database Management System (RDBMS) is a DBMS designed specifically for relational databases. Therefore, RDBMSes are a subset of DBMSes.\r\nA relational database refers to a database that stores data in a structured format, using rows and columns. This makes it easy to locate and access specific values within the database. It is "relational" because the values within each table are related to each other. Tables may also be related to other tables. The relational structure makes it possible to run queries across multiple tables at once.\r\nWhile a relational database describes the type of database an RDBMS manages, the RDBMS refers to the database program itself. It is the software that executes queries on the data, including adding, updating, and searching for values.\r\nAn RDBMS may also provide a visual representation of the data. For example, it may display data in a table like a spreadsheet, allowing you to view and even edit individual values in the table. Some relational database software allows you to create forms that streamline entering, editing, and deleting data.\r\nMost well-known DBMS applications fall into the RDBMS category. Examples include Oracle Database, MySQL, Microsoft SQL Server, and IBM DB2. 
Some of these programs support non-relational databases, but they are primarily used for relational database management.\r\nExamples of non-relational databases include Apache HBase, IBM Domino, and Oracle NoSQL Database. These types of databases are managed by other DBMS programs that support NoSQL, which do not fall into the RDBMS category.\r\nElements of the relational DBMS that overarch the basic relational database are so intrinsic to operations that it is hard to dissociate the two in practice.\r\nThe most basic features of RDBMS are related to the create, read, update and delete operations, collectively known as CRUD. They form the foundation of a well-organized system that promotes consistent treatment of data.\r\nThe RDBMS typically provides data dictionaries and metadata collections useful in data handling. These programmatically support well-defined data structures and relationships. Data storage management is a common capability of the RDBMS, and this has come to be defined by data objects that range from binary large object (blob) strings to stored procedures. Data objects like this extend the scope of basic relational database operations and can be handled in a variety of ways in different RDBMSes.\r\nThe most common means of data access for the RDBMS is via SQL. Its main language components comprise data manipulation language (DML) and data definition language (DDL) statements. Extensions are available for development efforts that pair SQL use with common programming languages, such as COBOL (Common Business-Oriented Language), Java and .NET.\r\nRDBMSes use complex algorithms that support multiple concurrent user access to the database, while maintaining data integrity. Security management, which enforces policy-based access, is yet another overlay service that the RDBMS provides for the basic database as it is used in enterprise settings.\r\nRDBMSes support the work of database administrators (DBAs) who must manage and monitor database activity. 
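The CRUD operations and the DDL/DML split described above can be illustrated with a short example. This is a hedged sketch using SQLite, the lightweight RDBMS bundled with Python's standard library; the books table and its columns are invented for the illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the table structure
conn.execute("CREATE TABLE books (isbn TEXT PRIMARY KEY, title TEXT, edition INTEGER)")

# DML - Create
conn.execute("INSERT INTO books VALUES ('978-0', 'SQL Basics', 1)")

# DML - Read
row = conn.execute("SELECT title, edition FROM books WHERE isbn = '978-0'").fetchone()
print(row)  # ('SQL Basics', 1)

# DML - Update
conn.execute("UPDATE books SET edition = 2 WHERE isbn = '978-0'")

# DML - Delete
conn.execute("DELETE FROM books WHERE isbn = '978-0'")
remaining = conn.execute("SELECT COUNT(*) FROM books").fetchone()[0]
print(remaining)  # 0
```

The same four statement types (INSERT, SELECT, UPDATE, DELETE) map one-to-one onto the CRUD operations in any SQL-based RDBMS.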
Utilities help automate data loading and database backup. RDBMSes manage log files that track system performance based on selected operational parameters. This enables measurement of database usage, capacity and performance, particularly query performance. RDBMSes provide graphical interfaces that help DBAs visualize database activity.\r\nRelational database management systems are central to key applications, such as banking ledgers, travel reservation systems and online retailing. As RDBMSes have matured, they have achieved increasingly higher levels of query optimization, and they have become key parts of reporting, analytics and data warehousing applications for businesses as well. \r\nRDBMSes are intrinsic to operations of a variety of enterprise applications and are at the center of most master data management (MDM) systems.<br /><br />","materialsDescription":"<h1 class=\"align-center\"> <span style=\"font-weight: normal;\">What are the advantages of a Relational Database Management System?</span></h1>\r\nA Relational Database Management System (RDBMS) is a software system that provides access to a relational database. The software system is a collection of software applications that can be used to create, maintain, manage and use the database. A "relational database" is a database structured on the "relational" model. Data are stored and presented in a tabular format, organized in rows and columns with one record per row.\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Data Structure.</span> The table format is simple and easy for database users to understand and use. Relational database management software provides data access using a natural structure and organization of the data. Database queries can search any column for matching entries.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Multi-User Access.</span> RDBMS database programs allow multiple database users to access a database simultaneously. 
Built-in locking and transaction management functionality allows users to access data as it is being changed, prevents collisions between two users updating the data, and keeps users from accessing partially updated records.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Privileges. </span>Authorization and privilege control features in an RDBMS allow the database administrator to restrict access to authorized users, and grant privileges to individual users based on the types of database tasks they need to perform. Authorization can be defined based on the remote client IP address in combination with user authorization, restricting access to specific external computer systems.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Network Access.</span> RDBMSs provide access to the database through a server daemon, a specialized software program that listens for requests on a network, and allows database clients to connect to and use the database. Users do not need to be able to log in to the physical computer system to use the database, providing convenience for the users and a layer of security for the database. Network access allows developers to build desktop tools and Web applications to interact with databases.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Speed.</span> The relational database model is not the fastest data structure. RDBMS software advantages, such as simplicity, make the slower speed a fair trade-off. Optimizations built into an RDBMS, and the design of the databases, enhance performance, allowing RDBMSs to perform more than fast enough for most applications and data sets. Improvements in technology, increasing processor speeds and decreasing memory and storage costs allow systems administrators to build incredibly fast systems that can overcome any database performance shortcomings.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Maintenance. 
</span>RDBMSs feature maintenance utilities that provide database administrators with tools to easily maintain, test, repair and back up the databases housed in the system. Many of the functions can be automated using built-in automation in the RDBMS, or automation tools available on the operating system.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Language.</span> RDBMSs support a generic language called "Structured Query Language" (SQL). The SQL syntax is simple, and the language uses standard English language keywords and phrasing, making it fairly intuitive and easy to learn. Many RDBMSs add non-SQL, database-specific keywords, functions and features to the SQL language.</li></ul>\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Relational_Database_Management_Systems.png","alias":"relational-database-management-systems"},"240":{"id":240,"title":"Non-relational Database Management Systems","description":" A non-relational database is a database that does not incorporate the table/key model that relational database management systems (RDBMS) promote. These kinds of databases require data manipulation techniques and processes designed to provide solutions to big data problems that big companies face. The most popular emerging class of non-relational databases is NoSQL (Not Only SQL).\r\nMost non-relational databases are incorporated into websites such as Google, Yahoo!, Amazon and Facebook. These websites introduce a slew of new applications every single day with millions and millions of users, so they would not be able to handle large traffic spikes with existing RDBMS solutions. Since RDBMSs cannot handle the problem, these companies have switched to a new kind of DBMS that is capable of handling Web-scale data in a non-relational way.<br /><br />An interesting aspect of a non-relational database such as NoSQL is scalability. NoSQL uses the BASE system (basically available, soft-state, eventually consistent). 
Non-relational databases forgo the table form of rows and columns relational databases use in favor of specialized frameworks to store data, which can be accessed by special query APIs. Persistence is an important element in these databases. To enable fast throughput of vast amounts of data, the best option for performance is "in memory," rather than reading and writing from disks.<br /><br />Relational databases use the ACID system, which ensures consistency of data in all situations of data management but obviously takes longer to process because of all those relations and their branching nature. However, the BASE system loosened up the requirements on consistency to achieve better availability and partitioning for better scalability.","materialsDescription":" <span style=\"font-weight: bold; \">What are NoSQL databases?</span>\r\nNoSQL databases are purpose-built for specific data models and have flexible schemas for building modern applications. NoSQL databases are widely recognized for their ease of development, functionality, and performance at scale. They use a variety of data models, including document, graph, key-value, in-memory, and search.\r\n<span style=\"font-weight: bold; \">How Does a NoSQL (nonrelational) Database Work?</span>\r\nNoSQL databases use a variety of data models for accessing and managing data, such as document, graph, key-value, in-memory, and search. These types of databases are optimized specifically for applications that require large data volume, low latency, and flexible data models, which are achieved by relaxing some of the data consistency restrictions of other databases.\r\nConsider the example of modeling the schema for a simple book database:\r\n<ul><li>In a relational database, a book record is often disassembled (or “normalized”) and stored in separate tables, and relationships are defined by primary and foreign key constraints. 
In this example, the Books table has columns for ISBN, Book Title, and Edition Number, the Authors table has columns for AuthorID and Author Name, and finally the Author-ISBN table has columns for AuthorID and ISBN. The relational model is designed to enable the database to enforce referential integrity between tables in the database, normalized to reduce the redundancy, and generally optimized for storage.</li><li>In a NoSQL database, a book record is usually stored as a JSON document. For each book, the item, ISBN, Book Title, Edition Number, Author Name, and AuthorID are stored as attributes in a single document. In this model, data is optimized for intuitive development and horizontal scalability.</li></ul>\r\n<span style=\"font-weight: bold; \">Why should you use a NoSQL database?</span>\r\nNoSQL databases are a great fit for many modern applications such as mobile, web, and gaming that require flexible, scalable, high-performance, and highly functional databases to provide great user experiences.\r\n<ul><li><span style=\"font-weight: bold; \">Flexibility:</span> NoSQL databases generally provide flexible schemas that enable faster and more iterative development. The flexible data model makes NoSQL databases ideal for semi-structured and unstructured data.</li><li><span style=\"font-weight: bold; \">Scalability:</span> NoSQL databases are generally designed to scale out by using distributed clusters of hardware instead of scaling up by adding expensive and robust servers. 
Some cloud providers handle these operations behind the scenes as a fully managed service.</li><li><span style=\"font-weight: bold; \">High-performance:</span> NoSQL databases are optimized for specific data models (such as document, key-value, and graph) and access patterns that enable higher performance than trying to accomplish similar functionality with relational databases.</li><li><span style=\"font-weight: bold; \">Highly functional:</span> NoSQL databases provide highly functional APIs and data types that are purpose-built for each of their respective data models.</li></ul>\r\n<span style=\"font-weight: bold;\">What are the types of NoSQL Databases?</span>\r\n<ul><li><span style=\"font-weight: bold;\">Key-value:</span> Key-value databases are highly partitionable and allow horizontal scaling at scales that other types of databases cannot achieve. Use cases such as gaming, ad tech, and IoT lend themselves particularly well to the key-value data model.</li><li><span style=\"font-weight: bold;\">Document:</span> In application code, data is often represented as an object or JSON-like document because it is an efficient and intuitive data model for developers. Document databases make it easier for developers to store and query data in a database by using the same document model format that they use in their application code. The flexible, semi-structured, and hierarchical nature of documents and document databases allows them to evolve with applications’ needs. The document model works well with catalogs, user profiles, and content management systems where each document is unique and evolves over time.</li><li><span style=\"font-weight: bold;\">Graph:</span> A graph database’s purpose is to make it easy to build and run applications that work with highly connected datasets. 
Typical use cases for a graph database include social networking, recommendation engines, fraud detection, and knowledge graphs.</li><li><span style=\"font-weight: bold;\">In-memory:</span> Gaming and ad-tech applications have use cases such as leaderboards, session stores, and real-time analytics that require microsecond response times and can have large spikes in traffic coming at any time. Amazon ElastiCache offers Memcached and Redis to serve low-latency, high-throughput workloads, such as those at McDonald’s, that cannot be served with disk-based data stores.</li><li><span style=\"font-weight: bold;\">Search:</span> Many applications output logs to help developers troubleshoot issues. Search databases can index and analyze this log data in near real time.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Nonrelational_Database_Management_Systems1.png","alias":"non-relational-database-management-systems"},"243":{"id":243,"title":"Database Development and Management Tools","description":" Many companies create various multi-functional applications to facilitate the management, development and administration of databases.\r\nMost relational databases consist of two separate components: a “back-end” where data is stored and a “front-end”, a user interface for interacting with data. This design is sensible, as it parallels a two-level programming model that separates the data layer from the user interface and allows software vendors to concentrate directly on improving their products. This model opens doors for third parties who create their own applications for interacting with various databases.\r\nDatabase development tools can be used to create varieties of the following programs:\r\n<ul><li>client programs;</li><li>database servers and their individual components;</li><li>custom applications.</li></ul>\r\nThe programs of the first and second types are rather small since they are intended mainly for system programmers. 
Packages of the third type are much larger, though still smaller than a full-featured DBMS.\r\nThe development tools for custom applications include programming systems, program libraries for various programming languages, and development automation packages (including client-server systems).<br />A database management system (DBMS) is a set of general- or special-purpose software and linguistic tools that manages the creation and use of databases.\r\nA DBMS is a set of programs that allow you to create a database (DB) and manipulate its data (insert, update, delete and select). The system ensures the safety and reliability of storage and data integrity, and provides the means to administer the database.","materialsDescription":" <span style=\"font-weight: bold;\">The main functions of a DBMS:</span>\r\n<ul><li>data management in external memory (on disk);</li><li>data management in RAM using a disk cache;</li><li>change logging, backup and recovery of databases after failures;</li><li>support for database languages (data definition language, data manipulation language).</li></ul>\r\n<span style=\"font-weight: bold;\">The composition of a DBMS:</span>\r\nUsually, a modern DBMS contains the following components:\r\n<ul><li>the core, which is responsible for managing data in external memory and RAM, and for logging;</li><li>a database language processor, which optimizes queries for data retrieval and modification and, as a rule, generates machine-independent executable internal code;</li><li>a run-time support subsystem that interprets data manipulation programs creating the user interface to the DBMS;</li><li>service programs (external utilities) that provide a number of additional capabilities for maintaining an information 
system.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Database_Development_and_Management_Tools.png","alias":"database-development-and-management-tools"},"245":{"id":245,"title":"Data Integration and Access Software","description":" Data integration involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved. Data integration encourages collaboration between internal as well as external users.\r\nData integration is the process of combining data from different sources into a single, unified view. Integration begins with the ingestion process, and includes steps such as cleansing, ETL mapping, and transformation. Data integration ultimately enables analytics tools to produce effective, actionable business intelligence.\r\nThere is no universal approach to data integration. However, data integration solutions typically involve a few common elements, including a network of data sources, a master server, and clients accessing data from the master server.\r\nIn a typical data integration process, the client sends a request to the master server for data. The master server then intakes the needed data from internal and external sources. The data is extracted from the sources, then consolidated into a single, cohesive data set. 
This is served back to the client for use.","materialsDescription":" <span style=\"font-weight: bold;\">Integration helps businesses succeed</span>\r\nEven if a company is receiving all the data it needs, that data often resides in a number of separate data sources. For example, for a typical customer 360 view use case, the data that must be combined may include data from their CRM systems, web traffic, marketing operations software, customer-facing applications, sales and customer success systems, and even partner data, just to name a few. Information from all of those different sources often needs to be pulled together for analytical needs or operational actions, and bringing it all together can be no small task for data engineers or developers.\r\nLet’s take a look at a typical analytical use case. Without unified data, a single report typically involves logging into multiple accounts, on multiple sites, accessing data within native apps, copying over the data, reformatting, and cleansing, all before analysis can happen.\r\nConducting all these operations as efficiently as possible highlights the importance of data integration. It also showcases the major benefits of a well thought-out approach to data integration:\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Improves collaboration and unification of systems</span></span>\r\nEmployees in every department — and sometimes in disparate physical locations — increasingly need access to the company's data for shared and individual projects. IT needs a secure solution for delivering data via self-service access across all lines of business.\r\nAdditionally, employees in almost every department are generating and improving data that the rest of the business needs. 
Data integration needs to be collaborative and unified in order to improve collaboration and unification across the organization.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Saves time and boosts efficiency</span></span>\r\nWhen a company takes measures to integrate its data properly, it cuts down significantly on the time it takes to prepare and analyze that data. The automation of unified views cuts out the need for manually gathering data, and employees no longer need to build connections from scratch whenever they need to run a report or build an application.\r\nAdditionally, using the right tools, rather than hand-coding the integration, returns even more time (and resources overall) to the dev team.\r\nAll the time saved on these tasks can be put to other, better uses, with more hours earmarked for analysis and execution to make an organization more productive and competitive.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Reduces errors (and rework)</span></span>\r\nThere’s a lot to keep up with when it comes to a company’s data resources. To manually gather data, employees must know every location and account that they might need to explore — and have all necessary software installed before they begin — to ensure their data sets will be complete and accurate. If a data repository is added, and that employee is unaware, they will have an incomplete data set.\r\nAdditionally, without a data integration solution that synchronizes data, reporting must be periodically redone to account for any changes. With automated updates, however, reports can be run easily in real time, whenever they’re needed.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Delivers more valuable data</span></span>\r\nData integration efforts actually improve the value of a business’ data over time. 
As data is integrated into a centralized system, quality issues are identified and necessary improvements are implemented, which ultimately results in more accurate data — the foundation for quality analysis.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Data_Integration_and_Access_Software.png","alias":"data-integration-and-access-software"},"255":{"id":255,"title":"Modeling and Architecture Tools","description":" Enterprise Architecture (EA) is a discipline that has gained, and will continue to gain, huge importance in mastering enterprise organization and its IT support.\r\nEnterprise Architecture is a complete expression of the enterprise, a master plan which “acts as a collaboration force” between aspects of business planning (such as goals, visions, strategies and governance principles), aspects of business operations (such as business terms, organization structures, processes, and data), aspects of automation (such as information systems and databases) and the enabling technological infrastructure of the business (such as computers, operating systems, and networks).\r\nEnterprise architects use various business methods, analytical techniques and conceptual tools to understand and document the structure and dynamics of an enterprise. In doing so, they produce lists, drawings, documents, and models, together called \"artifacts\". These artifacts describe the logical organization of business functions, business capabilities, business processes, people, information resources, business systems, software applications, computing capabilities, information exchange and communications infrastructure within the enterprise.","materialsDescription":" <span style=\"font-weight: bold; \">What is enterprise architecture?</span>\r\nEnterprise architecture (EA) is \"a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy.
Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies. These practices utilize the various aspects of an enterprise to identify, motivate, and achieve these changes.\"\r\nPractitioners of enterprise architecture, enterprise architects, are responsible for performing the analysis of business structure and processes and are often called upon to draw conclusions from the information collected to address the goals of enterprise architecture: effectiveness, efficiency, agility, and continuity of complex business operations.\r\n<span style=\"font-weight: bold; \">What are the terms \"enterprise\" and \"architecture\"?</span>\r\nThe term enterprise can be defined as describing an organizational unit, organization, or collection of organizations that share a set of common goals and collaborate to provide specific products or services to customers.\r\nIn that sense, the term enterprise covers various types of organizations, regardless of their size, ownership model, operational model, or geographical distribution. It includes those organizations' complete socio-technical systems, including people, information, processes, and technologies.\r\nThe term architecture refers to fundamental concepts or properties of a system in its environment, embodied in its elements, relationships, and in the principles of its design and evolution.\r\nUnderstood as a socio-technical system, the term enterprise defines the scope of enterprise architecture.\r\n<span style=\"font-weight: bold;\">What are the benefits?</span>\r\nThe benefits of enterprise architecture are achieved through its direct and indirect contributions to organizational goals.
It has been found that the most notable benefits of enterprise architecture can be observed in the following areas:\r\n<ul><li><span style=\"font-style: italic;\">Organizational design</span> - Enterprise architecture provides support in the areas related to design and re-design of the organizational structures during mergers, acquisitions or during general organizational change.</li><li><span style=\"font-style: italic;\">Organizational processes and process standards</span> - Enterprise architecture helps enforce discipline and standardization of business processes, and enable process consolidation, reuse, and integration.</li><li><span style=\"font-style: italic;\">Project portfolio management</span> - Enterprise architecture supports investment decision-making and work prioritization.</li><li><span style=\"font-style: italic;\">Project management</span> - Enterprise architecture enhances the collaboration and communication between project stakeholders. Enterprise architecture contributes to efficient project scoping and defining more complete and consistent project deliverables.</li><li><span style=\"font-style: italic;\">Requirements Engineering</span> - Enterprise architecture increases the speed of requirement elicitation and the accuracy of requirement definitions, through the publishing of the enterprise architecture documentation.</li><li><span style=\"font-style: italic;\">System development</span> - Enterprise architecture contributes to optimal system designs and efficient resource allocation during system development and testing.</li><li><span style=\"font-style: italic;\">IT management and decision making</span> - Enterprise architecture is found to help enforce discipline and standardization of IT planning activities and to contribute to a reduction in time for technology-related decision making.</li><li><span style=\"font-style: italic;\">IT value</span> - Enterprise architecture helps reduce the system's implementation and operational costs and 
minimize the replication of IT infrastructure services across business units.</li><li><span style=\"font-style: italic;\">IT complexity</span> - Enterprise architecture contributes to a reduction in IT complexity, consolidation of data and applications, and to better interoperability of the systems.</li><li><span style=\"font-style: italic;\">IT openness</span> - Enterprise architecture contributes to more open and responsive IT as reflected through increased accessibility of data for regulatory compliance, and increased transparency of infrastructure changes.</li><li><span style=\"font-style: italic;\">IT risk management</span> - Enterprise architecture contributes to the reduction of business risks from system failures and security breaches. Enterprise architecture helps reduce risks of project delivery.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Modeling_and_Architecture_Tools.png","alias":"modeling-and-architecture-tools"},"261":{"id":261,"title":"Automated Software Quality Tools","description":" Some software testing tasks, such as extensive low-level interface regression testing, can be laborious and time-consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers the possibility of performing these types of testing effectively. Once automated tests have been developed, they can be run quickly and repeatedly. Many times, this can be a cost-effective method for regression testing of software products that have a long maintenance life. Even minor patches over the lifetime of the application can cause existing features that were working at an earlier point in time to break.\r\nThere are many approaches to test automation; the general approaches used widely are:\r\n<ul><li>Graphical user interface testing.
A testing framework that generates user interface events such as keystrokes and mouse clicks, and observes the changes that result in the user interface, to validate that the observable behavior of the program is correct.</li><li>API driven testing. A testing framework that uses a programming interface to the application to validate the behaviour under test. Typically, API driven testing bypasses the application user interface altogether. It can also involve testing the public (usually) interfaces of classes, modules or libraries with a variety of input arguments to validate that the results returned are correct.</li></ul>\r\nTest automation tools can be expensive, and are usually employed in combination with manual testing. Test automation can be made cost-effective in the long term, especially when used repeatedly in regression testing. A good candidate for test automation is a test case for the common flow of an application, as it is required to be executed (regression testing) every time an enhancement is made in the application. Test automation reduces the effort associated with manual testing. Manual effort is needed to develop and maintain automated checks, as well as reviewing test results.\r\nIn automated testing the test engineer or software quality assurance person must have software coding ability, since the test cases are written in the form of source code which, when run, produce output according to the assertions that are a part of it. Some test automation tools allow for test authoring to be done by keywords instead of coding, which does not require programming skills.\r\nOne way to generate test cases automatically is model-based testing, through use of a model of the system for test case generation, but research continues into a variety of alternative methodologies for doing so.
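As a sketch of the API driven approach described above, the checks below exercise a hypothetical application interface (`create_order` is an invented example function, not part of any real framework) with a variety of input arguments and validate the returned results, bypassing any user interface:

```python
# Hypothetical application API under test; no GUI is involved.
def create_order(quantity, unit_price_cents):
    """Create an order; prices are in cents to keep arithmetic exact."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"quantity": quantity, "total_cents": quantity * unit_price_cents}

# API driven checks: call the interface directly with several inputs
# and validate the returned results.
cases = [((2, 999), 1998), ((5, 100), 500), ((1, 0), 0)]
for (qty, price), expected in cases:
    assert create_order(qty, price)["total_cents"] == expected

# Invalid input should be rejected at the API layer as well.
try:
    create_order(0, 999)
    raise AssertionError("expected ValueError for zero quantity")
except ValueError:
    pass
```

Because such tests talk to the programming interface rather than to rendered screens, they tend to survive UI redesigns that would break recorded GUI scripts.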
In some cases, the model-based approach enables non-technical users to create automated business test cases in plain English so that no programming of any kind is needed in order to configure them for multiple operating systems, browsers, and smart devices.\r\nWhat to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make. A multi-vocal literature review of 52 practitioner and 26 academic sources found that the five main factors to consider in the test automation decision are: 1) the System Under Test (SUT), 2) the types and numbers of tests, 3) the test tool, 4) human and organizational topics, and 5) cross-cutting factors. The most frequent individual factors identified in the study were: need for regression testing, economic factors, and maturity of the SUT.","materialsDescription":" <span style=\"font-weight: bold;\">Unit testing</span>\r\nA growing trend in software development is the use of unit testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected.\r\nTest automation mostly using unit testing is a key feature of extreme programming and agile software development, where it is known as test-driven development (TDD) or test-first development. Unit tests can be written to define the functionality before the code is written. However, these unit tests evolve and are extended as coding progresses, issues are discovered and the code is subjected to refactoring. Only when all the tests for all the demanded features pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration.
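In the xUnit style mentioned above, a test-first sketch might look like the following (Python's built-in `unittest` stands in here for JUnit/NUnit; the `slugify` function is an invented example, written after tests like these would be drafted in TDD):

```python
import unittest

def slugify(title):
    """Function under test: lower-case a title and join words with hyphens."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # In test-driven development, cases like these are written first
    # and define the expected behavior before the code exists.
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Many   Spaces  "), "many-spaces")

if __name__ == "__main__":
    unittest.main()
```

Each test method is an independent, repeatable check, which is what makes such suites cheap to re-run after every change.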
Such code is considered more reliable because the code coverage is better, and because it is run constantly during development rather than once at the end of a waterfall development cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix. Finally, code refactoring is safer when unit testing is used; transforming the code into a simpler form with less code duplication, but equivalent behavior, is much less likely to introduce new defects when the refactored code is covered by unit tests.\r\n<span style=\"font-weight: bold;\">Graphical User Interface (GUI) testing</span>\r\nMany test automation tools provide record and playback features that allow users to interactively record user actions and replay them back any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or no software development. This approach can be applied to any application that has a graphical user interface. However, reliance on these features poses major reliability and maintainability problems. Relabelling a button or moving it to another part of the window may require the test to be re-recorded. Record and playback also often adds irrelevant activities or incorrectly records some activities.\r\nA variation on this type of tool is for testing of web sites. Here, the \"interface\" is the web page. However, such a framework utilizes entirely different techniques because it is rendering HTML and listening to DOM Events instead of operating system events. Headless browsers or solutions based on Selenium Web Driver are normally used for this purpose.\r\nAnother variation of this type of test automation tool is for testing mobile applications. This is very useful given the number of different sizes, resolutions, and operating systems used on mobile phones.
For this variation, a framework is used in order to instantiate actions on the mobile device and to gather results of the actions.\r\nAnother variation is script-less test automation that does not use record and playback, but instead builds a model of the application and then enables the tester to create test cases by simply inserting test parameters and conditions, which requires no scripting skills.\r\n<span style=\"font-weight: bold; \">API driven testing</span>\r\nAPI testing is also being widely used by software testers due to the difficulty of creating and maintaining GUI-based automation testing. It involves directly testing APIs as part of integration testing, to determine if they meet expectations for functionality, reliability, performance, and security. Since APIs lack a GUI, API testing is performed at the message layer. API testing is considered critical when an API serves as the primary interface to application logic since GUI tests can be difficult to maintain with the short release cycles and frequent changes commonly used with agile software development and DevOps.\r\n<span style=\"font-weight: bold;\">Continuous testing</span>\r\nContinuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. For Continuous Testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Automated_Software_Quality_Tools1.png","alias":"automated-software-quality-tools"},"263":{"id":263,"title":"Software Configuration Management Tools","description":"<span style=\"font-weight: bold; \">Software configuration management</span> (SCM) is a set of processes, policies, and tools that organizes the development process. 
It simultaneously maintains the current state of the software (called the “baseline”), while enabling developers to work on new versions for features or fixes.\r\nIn software engineering, the <span style=\"font-weight: bold; \">software configuration management process</span> is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine what was changed and who changed it. If a configuration is working well, SCM can determine how to replicate it across many hosts. \r\nThe acronym \"SCM\" is also expanded as <span style=\"font-weight: bold; \">source configuration management process</span> and <span style=\"font-weight: bold; \">software change and configuration management.</span> However, \"configuration\" is generally understood to cover changes typically made by a system administrator. \r\nSoftware configuration control usually includes the updates and the versions that have been applied to software packages, as well as locations and network addresses of hardware devices. When a system needs a software or hardware upgrade, the user can access the configuration management program and database to see what is currently installed and then make a more informed decision about the upgrade needed. Configuration management tools can be divided into three categories: <span style=\"font-weight: bold;\">tracking</span>, <span style=\"font-weight: bold;\">version management</span>, and <span style=\"font-weight: bold;\">release tools</span>.\r\nSCM configuration management traces changes and verifies that the software has all of the planned changes that are supposed to be included in a new release. It includes four procedures that should be defined for each software project to ensure that a reliable software configuration management process is utilized.
The four procedures typically found in a reliable software configuration management system are:\r\n<span style=\"font-weight: bold; \">Configuration identification. </span>It is the procedure by which the attributes that define all the properties of a configuration item are identified. A configuration item, referred to as an object, is a product (hardware and/or software) that supports use by an end user. These attributes are recorded in configuration documents or database tables and baselined. A baseline is an approved configuration object, such as a project plan, that has been authorized for implementation.\r\n<span style=\"font-weight: bold; \">Configuration control.</span> It is a set of processes and approval stages required to change a configuration object’s attributes and to rebaseline them.<span style=\"font-weight: bold; \"><br /></span>\r\n<span style=\"font-weight: bold; \">Configuration status documentation. </span>Configuration status accounting is the ability to record and report on the configuration baselines associated with each configuration object at any point in time.\r\n<span style=\"font-weight: bold; \">Configuration audits. </span>Configuration audits are divided into functional and physical configuration audits. An audit occurs at the time of delivery of a project or at the time a change is made. A functional configuration audit is intended to make sure that functional and performance attributes of a configuration object are achieved.
A physical configuration audit attempts to ensure that a configuration object is installed based on the requirements of its design specifications.\r\n<span style=\"font-weight: bold; \">The advantages of a software configuration management system are:</span>\r\n<ul><li>It reduces redundant work</li><li>It effectively manages simultaneous updates</li><li>It avoids configuration related problems</li><li>It simplifies coordination between team members</li><li>It is helpful in tracking defects</li></ul>\r\n\r\n\r\n\r\n","materialsDescription":"<h1 class=\"align-center\"> What are the outcomes of well-implemented configuration management?</h1>\r\n<ul><li><span style=\"font-weight: bold; \">Disaster Recovery<br /></span></li></ul>\r\nIf the worst does happen, automated configuration management tools ensure that our assets are easily recoverable. The same applies to rollbacks. Configuration management makes it so that when we’ve put out bad code, we can go back to the state of our software before the change.\r\n<ul><li><span style=\"font-weight: bold; \">Uptime and Site Reliability</span></li></ul>\r\nThe term “site reliability” refers to how often your service is up. A frequent cause of downtime is bad deployments, which can be caused by differences between running production servers and test servers. With our configuration managed properly, our test environments can mimic production, so there’s less chance of a nasty surprise.\r\n<ul><li><span style=\"font-weight: bold; \">Easier Scaling</span></li></ul>\r\nProvisioning is the act of adding more resources (usually servers) to our running application. Configuration automation tools ensure that we know what a good state of our service is. That way, when we want to increase the number of servers that we run, it’s simply a case of clicking a button or running a script.
The goal is really to make provisioning a non-event.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Software_Configuration_Management_Tools.png","alias":"software-configuration-management-tools"},"385":{"id":385,"title":"Model-Driven Application Platforms","description":"A model-driven application is a software application whose functions or behaviors are based on, or controlled by, evolving applied models of the things the application targets. The applied models are part of the application system and can be changed at runtime. The target things are what the application deals with, such as the objects and affairs in business for a business application. Following the definition of application in TOGAF, a model-driven business application could be described as an IT system that supports business functions and services running on the models of the (things in) business.\r\nThe idea of the architecture for a model-driven application was first put forward by Tong-Ying Yu on the Enterprise Engineering Forum in 1999 and has since been studied and spread through various internet media. It had influence on the field of enterprise application development in China; there were successful cases of commercial development of enterprise/business applications in the architectural style of a model-driven application. Gartner Group carried out some studies into the subject in 2008; they defined model-driven packaged applications as \"enterprise applications that have explicit metadata-driven models of the supported processes, data and relationships, and that generate runtime components through metadata models, either dynamically interpreted or compiled, rather than hardcoded.\"
The model-driven application architecture was claimed by some industry researchers in 2012 to be one of the few technology trends driving the next generation of application modernization.","materialsDescription":" <span style=\"font-weight: bold; \">What is Model-driven development?</span>\r\nModel-driven development (MDD) is an approach for writing and implementing software quickly, effectively and at minimum cost. The methodology is also known as model-driven software development (MDSD), model-driven engineering (MDE) and model-driven architecture (MDA).\r\nThe MDD approach focuses on the construction of a software model. The model is a diagram that specifies how the software system should work before the code is generated. Once the software is created, it can be tested using model-based testing (MBT) and then deployed.\r\n<span style=\"font-weight: bold; \">What are the benefits of model-driven development?</span>\r\nThe MDD approach provides advantages in productivity over other development methods because the model simplifies the engineering process. It represents the intended behaviors or actions of a software product before coding begins.\r\nThe individuals and teams that work on the software construct models collaboratively. Communication between developers and a product manager, for example, provides clear definitions of what the software is and how it works. Tests, rebuilds and redeployments can be faster when developing multiple applications with MDD than with traditional development.\r\n<span style=\"font-weight: bold; \">What are the core concepts of model-driven development?</span>\r\nModel-driven development is more in-depth than just having a model of the software in development, which makes it different from model-based development. Abstraction and automation are key concepts of MDD. Abstraction means organizing complex software systems.
In MDD, complex software gets abstracted, and the easy-to-define code is then extracted from the abstraction.\r\nOnce developers transform the abstraction, a working version of the software model gets automated. This automation stage uses a domain-specific language (DSL), such as HTML, and scripting languages, like ColdFusion, which can integrate other programming languages and services -- .NET, C++, FTP and more -- for use in websites. A DSL is a language specialized to an application domain. A model is written in a DSL and is then transformed into working software in a coding language.\r\nAgile software development methods are often paired with MDD. The Agile development approach enables short sprints where the project scope can change. Agile model-driven development (AMDD) establishes short development iterations, while changes can be redesigned and shown on the model. In AMDD, design efforts within each sprint are split between modeling and coding.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Model_Driven_Application_Platforms.png","alias":"model-driven-application-platforms"},"387":{"id":387,"title":"Managed File Transfer Software, MFT","description":"<span style=\"font-weight: bold; \">Managed file transfer (MFT)</span> is a technology platform that allows organizations to reliably exchange electronic data between systems and people in a secure way to meet compliance needs. These data movements can be both internal and external to an enterprise and include various types, including sensitive, compliance-protected or high-volume data.
It can be offered as software or as a service and may include a single pane for visibility and governance.\r\nManaged file transfer software is a more reliable and efficient means for secure data and file transfer, outpacing and outperforming applications such as file transfer protocol (FTP), hypertext transfer protocol (HTTP), secure file transfer protocol (SFTP) and other methods.\r\nOrganizations increasingly rely on managed file transfer tools to support their business needs and goals in a way that FTP cannot. FTP presents many challenges, such as data security gaps, lack of visibility when a problem occurs, time-consuming manual recovery from failures and costly SLA fees due to poor performance.\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Key capabilities of effective managed file transfer</span></p>\r\n<ul><li><span style=\"font-weight: bold;\">Security.</span> Encrypt internal and external transfers, in motion and at rest. Secure file transfers with advanced capabilities like session breaks and protocol inspection maximize the protection of sensitive data at multiple layers.</li><li><span style=\"font-weight: bold;\">Simplified file transfer.</span> Offers timely and flexible data transfer across a range of file transfer activities and support for multiple file types, including multimedia, PDFs, email, XML, EDI and more.</li><li><span style=\"font-weight: bold;\">Full visibility.</span> Provides a 360-degree view in near-real time. Companies can see who is transferring files, what is being shared and the volume passing through the system. Potential issues like delays and failed transfers are visible before they impact downstream business processes or become missed SLAs.</li><li> <span style=\"font-weight: bold;\">Compliance standards.</span> Strong encryption helps companies avoid compliance failures which can lead to hefty fines.
Thorough audit trails confirm regulatory compliance.</li></ul>\r\n<p class=\"align-left\"> </p>","materialsDescription":"<h1 class=\"align-center\">What are the benefits of managed file transfer?</h1>\r\n<span style=\"font-weight: bold; \">Data security.</span> High-profile data breaches and failed transfers can drastically impact a company’s bottom line and reputation. MFT offers a preemptive security strategy with real-time monitoring, and validation security policies and controls to protect data in transit or at rest.\r\n<span style=\"font-weight: bold; \">Data growth.</span> Data is everywhere, and companies face larger and more varied files than in the past. The number of users sharing files has grown, as have the number of end-points and devices. And as files get larger, the time to move them over global distances becomes longer. MFT brings reliable, automated governance to the movement of files inside and outside the business and can accelerate big data movements around the globe.\r\n<span style=\"font-weight: bold; \">Regulatory compliance.</span> Legislative and industry requirements such as the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), Basel II, the Sarbanes-Oxley Act (SOX) and others typically have stringent data security standards. Using a properly configured MFT system to encrypt, transmit, monitor and store sensitive data empowers organizations to meet security mandates.\r\n<span style=\"font-weight: bold; \">Technology megatrends.</span> Moving files has become more complex with the adoption of transformational technologies. The growth of big data, cloud applications, artificial intelligence, data analytics and the Internet of Things (IoT) place a premium on the speed and bulk of file transfers.
MFT offers advanced capabilities and support for multiple platforms, mobile devices, applications and other existing IT infrastructure.\r\n<span style=\"font-weight: bold; \">Visibility.</span> Companies need to anticipate risk factors to mitigate damages. Operational visibility over file movements leads to proactive issue resolution, like failed transfers and improved compliance with SLA commitments.\r\n<h1 class=\"align-center\">Secure File Transfer vs Managed File Transfer software comparison</h1>\r\nMFT is a platform. This may make it seem more advanced than other protocols, and arguably it is. It offers administration capabilities coupled with automation and popular security protocols like HTTPS, SFTP, and FTPS. Often, the interface of MFT is designed for transparency and visibility. Generally, it’s a more secure transfer protocol than most others.\r\nMFT beats secure file transfers in complexity and nuance and crushes the competition when it comes to security. If we had to find some drawbacks to implementing an MFT strategy, its complexity may mean a learning curve is required for some users. Also, managed file transfer implies management is required. The introduction of visibility and transparency of the process offers no benefit if the processes aren’t being monitored.\r\nFTP (File Transfer Protocol) and SFT (Secure File Transfer) are both network protocols for “put” and “get” functions. With regards to billing data, data recovery files and other sensitive information that enterprise businesses need to hold and share, SFT offers encryption, whereas FTP does not. SFT was designed for the purposes of securely transmitting data.\r\nTo that point, SFT uses the Secure Shell (SSH) network protocol to transfer data across a channel. Data is protected as long as it is moving across the channel. Once it hits a secured server, it’s no longer protected.
For additional encryption, senders would need to ensure encryption occurs in advance of sending.\r\nThe main benefit of Secure File Transfer is that data is encrypted during the sending process, whereas regular FTP has no such protection. It’s still second to an MFT platform, but SFT could be a less expensive alternative, depending on how much impact data transfer has on your business.\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Managed_File_Transfer_Software.png","alias":"managed-file-transfer-software-mft"},"391":{"id":391,"title":"Cloud Testing/ASQ PaaS","description":" With growing technological advancement, applications require continuous testing. The conventional mode of application testing is very time-consuming, and the cost associated with such testing is also high. That is why demand for solutions to test applications in the cloud and for the cloud is rising significantly, as high-quality, well-performing software across platforms drives business innovation and competitive positioning. Software vendors across the globe are investing huge amounts of money in research and development of software that can provide a more software-centric IT infrastructure to their customers. These software vendors are looking for automated software quality (ASQ) testing, Software as a Service (SaaS) and adaptive infrastructure support in the cloud.\r\nCloud testing and ASQ software facilitate quick access to both high-quality solutions and the support infrastructure needed to sustain complex software sourcing and dynamic development. Cloud testing solutions require fewer resources and less infrastructure investment than on-premise ASQ solutions.\r\nThe continuous development in the cloud computing space is driving the growth of the global cloud testing and ASQ software market. Cloud computing is creating a new shift in the IT model. Cloud computing enables organizations to adopt Software as a Service at very low cost. 
Software as a Service offers business organizations a more agile framework and increases their efficiency; at the same time, Software as a Service is a complex phenomenon and requires continuous monitoring. As organizations deploy more enterprise mobility solutions and mobile applications, cloud testing and ASQ software vendors are seeing a huge opportunity in the market.\r\nHowever, business organizations’ software needs change very frequently, and coping with these rapidly changing software advancements is very difficult for cloud testing and ASQ software vendors; this is the biggest challenge the cloud testing and ASQ software market faces.","materialsDescription":" <span style=\"font-weight: bold; \">What is Cloud testing?</span>\r\nCloud testing is a form of software testing in which web applications use cloud computing environments (a "cloud") to simulate real-world user traffic.\r\nCloud testing uses cloud infrastructure for software testing. Organizations pursuing testing in general, and load testing, performance testing and production service monitoring in particular, are challenged by several problems: limited test budgets, tight deadlines, high cost per test, large numbers of test cases, and little or no reuse of tests; the geographical distribution of users adds to these challenges. Moreover, ensuring high-quality service delivery and avoiding outages requires testing in one's data center, outside the data center, or both. Cloud testing is the solution to these problems. Effectively unlimited storage, quick availability of infrastructure with scalability, flexibility and the availability of a distributed testing environment reduce the execution time of testing of large applications and lead to cost-effective solutions.\r\nTraditional approaches to testing software incur a high cost to simulate user activity from different geographic locations. Testing firewalls and load balancers involves expenditure on hardware, software and their maintenance. 
In the case of applications where the rate of increase in the number of users is unpredictable, or where the deployment environment varies depending on client requirements, cloud testing is more effective.\r\n<span style=\"font-weight: bold; \">What are the types of testing?</span>\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Stress</span></span>\r\nA stress test is used to determine the ability of the application to maintain a certain level of effectiveness beyond the breaking point. It is essential for any application to work even under excessive stress and maintain stability. Stress testing assures this by creating peak loads using simulators. But the cost of creating such scenarios is enormous. Instead of investing capital in building on-premises testing environments, cloud testing offers an affordable and scalable alternative.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Load</span></span>\r\nLoad testing of an application involves the creation of heavy user traffic and measuring its response. There is also a need to tune the performance of any application to meet certain standards; a number of tools are available for that purpose.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Performance</span></span>\r\nIdentifying thresholds, bottlenecks and limitations is part of performance testing. For this, testing performance under a particular workload is necessary. By using cloud testing, it is easy to create such an environment and vary the nature of traffic on demand. This effectively reduces cost and time by simulating thousands of geographically targeted users.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Functional</span></span>\r\nFunctional testing of both internet and non-internet applications can be performed using cloud testing. 
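The load-testing idea described above, generating many concurrent requests and measuring the response, can be sketched in a few lines of Python. This is a minimal illustration, not a production tool: `handle_request` is a hypothetical stub standing in for the system under test, and real load tests would use dedicated tooling against live endpoints.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stub standing in for the system under test."""
    time.sleep(0.01)  # simulate ~10 ms of service time
    return i * 2

def load_test(n_requests, concurrency):
    """Fire n_requests with the given concurrency and collect latencies."""
    latencies = []

    def timed_call(i):
        start = time.perf_counter()
        handle_request(i)
        latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed_call, range(n_requests)))
    return {
        "requests": n_requests,
        "avg_latency_s": sum(latencies) / len(latencies),
        "max_latency_s": max(latencies),
    }

report = load_test(n_requests=50, concurrency=10)
```

Raising `concurrency` while watching `max_latency_s` climb is, in miniature, how a load test shades into a stress test: the same harness is pushed until response times degrade or requests fail.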
The process of verification against specifications or system requirements is carried out in the cloud instead of on-site software testing.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Compatibility</span></span>\r\nUsing a cloud environment, instances of different operating systems can be created on demand, making compatibility testing effortless.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Browser performance</span></span>\r\nVerifying the application's support for various browser types, and its performance in each, can be accomplished with ease. Various tools enable automated website testing from the cloud.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Latency</span></span>\r\nCloud testing is utilized to measure the latency between an action and the corresponding response for any application after deploying it on the cloud.\r\n<span style=\"font-weight: bold; \">What are the keys to successful testing?</span>\r\n<ol><li>Understanding a platform provider's elasticity model/dynamic configuration method</li><li>Staying abreast of the provider's evolving monitoring services and Service Level Agreements (SLAs)</li><li>Potentially engaging the service provider as an ongoing operations partner if producing commercial off-the-shelf (COTS) software</li><li>Being willing to be used as a case study by the cloud service provider. The latter may lead to cost reductions.</li></ol>\r\nCloud testing is often seen as only performance or load testing; however, as discussed earlier, it covers many other types of testing. Cloud computing itself is often referred to as the marriage of software as a service (SaaS) and utility computing. In regard to test execution, the software offered as a service may be a transaction generator and the cloud provider's infrastructure software, or may just be the latter. 
Distributed systems and parallel systems mainly use this approach for testing because of their inherently complex nature. D-Cloud is an example of such a software testing environment.\r\nFor testing non-internet applications, virtual instances of the testing environment can be quickly set up for automated testing of the application. Cloud testing service providers supply the testing environment required by the application under test. The actual testing of applications is performed by the testing team of the organization that owns the application, or by third-party testing vendors.\r\n<span style=\"font-weight: bold;\">What are the benefits?</span>\r\nThe difficulty and cost of simulating web traffic for software testing purposes have been an inhibitor to overall web reliability. The low cost and accessibility of the cloud's extremely large computing resources provide the ability to replicate real-world usage of these systems by geographically distributed users, executing wide varieties of user scenarios, at scales previously unattainable in traditional testing environments. Minimal start-up time along with quality assurance can be achieved with cloud testing.\r\nFollowing are some of the key benefits:\r\n<ul><li>Reduction in capital expenditure</li><li>Highly scalable</li></ul>\r\n<span style=\"font-weight: bold;\">What are the issues?</span>\r\nThe initial setup cost of migrating testing to a cloud is very high, as it involves modifying some of the test cases to suit the cloud environment. This makes the decision to migrate crucial. Cloud testing is therefore not necessarily the best solution to every testing problem.\r\nLegacy systems and services need to be modified in order to be tested on the cloud. Using robust interfaces with these legacy systems may solve this problem. 
Also like any other cloud services, cloud testing is vulnerable to security issues.\r\nThe test results may not be accurate due to the varying performance of the service providers’ network and the internet. In many cases, service virtualization can be applied to simulate the specific performance and behaviors required for accurate and thorough testing.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Cloud_Testing.png","alias":"cloud-testingasq-paas"},"393":{"id":393,"title":"Embedded Database Management Systems","description":" An embedded database is a database technology in which database management solutions are built into an application rather than provided as standalone tools. In many cases, this effectively "hides" the database management tools from the end user.\r\nAn embedded database system can be set up in many ways. It can include traditional relational database designs or other kinds of storage formats. It can utilize different solutions as well; for example, a popular type of embedded architecture uses MS Access for storage and relies on VBA forms to handle data requests. Many of these systems also use various APIs and SQL tools to perform data-related tasks.\r\nEmbedded database designs are used for various purposes. 
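The in-process, "hidden" architecture described above can be illustrated with Python's built-in sqlite3 module, which embeds a full relational engine inside the application with no separate server to install or maintain. The table and data here are illustrative only, echoing the email-archive use case mentioned below.

```python
import sqlite3

# The database lives entirely inside the application process;
# ":memory:" keeps it in RAM, while a file path would persist it on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (id INTEGER PRIMARY KEY, subject TEXT)")
conn.executemany(
    "INSERT INTO emails (subject) VALUES (?)",
    [("Quarterly report",), ("Tax filing deadline",)],
)
# The application queries its own embedded store directly: no network round trip,
# no database administrator, nothing visible to the end user.
rows = conn.execute(
    "SELECT subject FROM emails WHERE subject LIKE ?", ("%tax%",)
).fetchall()
conn.close()
```

The same pattern, a library linked into the application rather than a standalone service, is what distinguishes embedded DBMSs from client-server systems.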
Embedded database tools, for example, can be used for email archive searches, for presentation of gaming statistics or other stored game data, and for industry-specific tools like tax-preparation software packages.\r\nIT professionals also sometimes use the term embedded database to refer to database solutions that run on mobile devices.","materialsDescription":" <span style=\"font-weight: bold; \">What does "Embedded Database Management Systems" mean?</span>\r\nAn embedded database system is a database management system (DBMS) which is tightly integrated with the application software that requires access to stored data, such that the database system is "hidden" from the application’s end-user and requires little or no ongoing maintenance.\r\n<span style=\"font-weight: bold;\">What does it include?</span>\r\nIt is actually a broad technology category that includes\r\n<ul><li>database systems with differing application programming interfaces (SQL as well as proprietary, native APIs),</li><li>database architectures (client-server and in-process),</li><li>storage modes (on-disk, in-memory, and combined),</li><li>database models (relational, object-oriented, entity–attribute–value model, network/CODASYL),</li><li>target markets.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Embedded_Database_Management_Systems.png","alias":"embedded-database-management-systems"},"395":{"id":395,"title":"Agile Application Life-Cycle Management Software","description":" Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). 
It advocates adaptive planning, evolutionary development, empirical knowledge, and continual improvement, and it encourages rapid and flexible response to change.\r\nThe term agile (sometimes written Agile) was popularized, in this context, by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban.\r\nThere is significant anecdotal evidence that adopting agile practices and values improves the agility of software professionals, teams and organizations; however, some empirical studies have found no scientific evidence of such improvement.\r\nAgile application lifecycle management (Agile ALM) comprises all the tools and processes used to manage software development projects based on the Agile methodology. The traditional waterfall model uses a phased approach to the development life cycle. This approach means that no project phase starts before the previous one ends. For example, design does not begin before the collection of requirements ends. Development does not begin until the design is completed. Testing does not begin until development is fully completed. There are many tools to manage the inputs and outputs of each phase.","materialsDescription":" Agile ALM brings together two seemingly contradictory development strategies. Agile promotes flexibility, rapid release cycles and quick response to change. Application lifecycle management (ALM) emphasizes tracking and documenting changes in an application -- from inception to retirement. Its processes are more controlled and less adaptive than the Agile methodology. 
That said, when put together, Agile and ALM act as complements, rendering ALM more flexible and Agile more disciplined.\r\n<span style=\"font-weight: bold;\">What is Agile ALM?</span>\r\nDevelopment expert Yvette Francino described Agile ALM as ALM tools and processes that are used to manage Agile software development projects. For example, rather than using Waterfall's phased approach, Agile ALM offers an approach to software development in which design, code and requirements are all handled by the same team.\r\n<span style=\"font-weight: bold;\">How do you integrate Agile into an ALM framework?</span>\r\nAccording to Gerie Owen's article on Agile and ALM, adopting Agile means both a change to the ALM approach and a change to an organization's mind-set. An Agile ALM strategy will focus on the customer and will have the ability to adapt to shifting requirements -- from project planning to release management. For example, instead of just implementing controls to force early feedback from testers and business analysts, an organization would also foster a culture of collaboration.\r\n<span style=\"font-weight: bold;\">Are there tools that can help me achieve this?</span>\r\nALM tools are widely available but must be chosen with care, according to Yvette Francino, SearchSoftwareQuality contributor. Organizations should look for tools that facilitate the process without impeding acceptance of changing requirements. They would also need to integrate throughout the application lifecycle and be easy to maintain. In other words, the tool should manage the development process in an Agile way. In an article for SearchSoftwareQuality.com, Amy Reichert provides a list of Agile ALM tools and identifies their strengths and weaknesses. Rally Software, for example, offers a product that works well with Agile but, according to Reichert, does not provide an intuitive workflow. VersionOne, on the other hand, offers a tool that is more user-friendly but less compatible with Agile. 
Which one is best will depend on the company's needs.\r\n<span style=\"font-weight: bold;\">Are there challenges to Agile ALM that I should be aware of?</span>\r\nThe primary challenge to Agile ALM is in finding a balance between the two methodologies. A common pitfall is to over-ALM the development process. In other words, when developers and testers start to find workarounds to the software rules -- as they often do -- some react by creating more rules in order to more strictly enforce them. Meanwhile, processes lose their agility.\r\n<span style=\"font-weight: bold;\">How can I overcome these challenges?</span>\r\nTesting expert Amy Reichert cautions development teams to keep track of how many rules they add and how those rules are communicated. She also suggests having a discussion with the team, asking them why they are circumventing the process. Once everyone's role has been clarified, project managers can then decide which rules, if any, to add.\r\n<span style=\"font-weight: bold;\">Is Agile ALM a good approach for mobile development?</span>\r\nMobile development is faster and more competitive than traditional software development. It has newer technologies and higher-speed application cycles. These qualities could make mobile an excellent candidate for Agile ALM, but only if the methodology is amended to accommodate the challenges inherent in a more restrictive development process. In an article on mobile ALM, site editor James Denman suggested an ALM approach that focuses on smaller pieces of software and authenticates results as each part is finished. 
That way, teams can quickly discern whether the app will effectively serve its purpose or if it needs to be taken in a different direction.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Agile_Application_Life_Cycle_Management_Software.png","alias":"agile-application-life-cycle-management-software"},"401":{"id":401,"title":"Service-Oriented Architecture and Web Services","description":" Service-oriented architecture (SOA) is a style of software design where services are provided to other components by application components, through a communication protocol over a network. An SOA service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. SOA is also intended to be independent of vendors, products and technologies.\r\nA service has four properties according to one of many definitions of SOA:\r\n<ul><li>It logically represents a business activity with a specified outcome.</li><li>It is self-contained.</li><li>It is a black box for its consumers, meaning the consumer does not have to be aware of the service's inner workings.</li><li>It may consist of other underlying services.</li></ul>\r\nDifferent services can be used in conjunction to provide the functionality of a large software application, a principle SOA shares with modular programming. Service-oriented architecture integrates distributed, separately maintained and deployed software components. It is enabled by technologies and standards that facilitate components' communication and cooperation over a network, especially over an IP network.\r\nSOA is related to the idea of an application programming interface (API), an interface or communication protocol between different parts of a computer program intended to simplify the implementation and maintenance of software. 
An API can be thought of as the service, and the SOA the architecture that allows the service to operate.","materialsDescription":" <span style=\"font-weight: bold;\">What is Service-Oriented Architecture?</span>\r\nService-oriented architecture (SOA) is a software architecture style that supports and distributes application components and incorporates discovery, data mapping, security and more. Service-oriented architecture has two main functions:\r\n<ol><li>Create an architectural model that defines the goals of applications and methods that will help achieve those goals.</li><li>Define implementation specifications linked through WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol) specifications.</li></ol>\r\nService-oriented architecture principles are made up of nine main elements:\r\n<ol><li>Standardized Service Contract, where services are defined, making it easier for client applications to understand the purpose of the service.</li><li>Loose Coupling is a way of interconnecting components within the system or network so that the components depend on one another to the least extent acceptable. When a service's functionality or settings change, there is no downtime or breakage of the running application.</li><li>Service Abstraction hides the logic behind what the application is doing. 
It only relays to the client application what it is doing, not how it executes the action.</li><li>Service Reusability divides the services with the intent of reusing as much as possible to avoid spending resources on building the same code and configurations.</li><li>Service Autonomy ensures the logic of a task or a request is completed within the code.</li><li>Service Statelessness means that services do not retain information from one state to another in the client application.</li><li>Service Discoverability allows services to be discovered via a service registry.</li><li>Service Composability breaks down larger problems into smaller elements, segmenting the service into modules, making it more manageable.</li><li>Service Interoperability governs the use of standards (e.g. XML) to ensure broader usability and compatibility.</li></ol>\r\n<span style=\"font-weight: bold;\">How Does Service-Oriented Architecture Work?</span>\r\nA service-oriented architecture (SOA) works as a provider of application services to other components over a network. 
Service-oriented architecture makes it easier for software components to work with each other over multiple networks.\r\nA service-oriented architecture is implemented with web services (based on WSDL and SOAP), so as to be accessible over standard internet protocols that are independent of platforms and programming languages.\r\nService-oriented architecture has three major objectives, all of which focus on parts of the application cycle:\r\n<ol><li>Structure processes and software components as services – making it easier for software developers to create applications in a consistent way.</li><li>Provide a way to publish available services (functionality and input/output requirements) – allowing developers to easily incorporate them into applications.</li><li>Control the usage of these services for security purposes – mainly around the components within the architecture, and securing the connections between those components.</li></ol>\r\nMicroservices architecture software is largely an updated implementation of service-oriented architecture (SOA). 
The software components are created as services to be used via APIs ensuring security and best practices, just as in traditional service-oriented architectures.\r\n<span style=\"font-weight: bold;\">What are the benefits of Service-Oriented Architecture?</span>\r\nThe main benefits of service-oriented architecture solutions are:\r\n<ul><li>Extensibility – easily able to expand or add to it.</li><li>Reusability – opportunity to reuse multi-purpose logic.</li><li>Maintainability – the ability to keep it up to date without having to remake and build the architecture again with the same configurations.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Service_Oriented_Architecture_and_Web_Services.png","alias":"service-oriented-architecture-and-web-services"},"403":{"id":403,"title":"Software Quality Analysis and Measurement","description":" Software quality measures whether the software satisfies its requirements. Software requirements are classified as either functional or non-functional.\r\nFunctional requirements specify what the software should do. Functional requirements could be calculations, technical details, data manipulation, and processing, or any other specific function that defines what an application is meant to accomplish.\r\nNon-functional requirements specify how the system should work. Also known as “quality attributes” non-functional requirements include things like disaster recovery, portability, privacy, security, supportability, and usability.\r\nNote that most factors indicating software quality fit into the non-functional requirements category. And, while it’s obviously important that software does what it’s built to do, this is the bare minimum you would expect from any application.\r\nBelow are some examples of test metrics and methods for measuring the important aspects of software quality. 
Efficient measuring and testing of your software for quality is the only way to maximize the chances of releasing high-quality software in today’s fast-paced development environments.\r\nYou can measure reliability by counting the number of high-priority bugs found in production. You can also use load testing, which assesses how well the software functions under ordinary conditions of use. It’s important to note that “ordinary conditions of use” can vary between low loads and high loads—the point is that such environments are expected.\r\nLoad testing is also useful for measuring performance efficiency. Stress testing is an important variation on load testing used to determine the maximum operating capacity of an application.\r\nStress testing is conducted by inundating software with requests far exceeding its normal and expected patterns of use to determine how far a system can be pushed before it breaks. With stress testing, you get insight into the recoverability of the software when it breaks—ideally, a system that fails should have a smooth recovery.\r\nYou can measure security by assessing how long it takes to patch or fix software vulnerabilities. You can also check actual security incidents from previous software versions, including whether the system was breached and if any breaches caused downtime for users. All previous security issues should, of course, be addressed in future releases.\r\nCounting the number of lines of code is a simple measure of maintainability—software with more lines of code is harder to maintain, meaning changes are more likely to lead to errors.\r\nThere are several detailed test metrics used to check the complexity of code, such as cyclomatic complexity, which counts the number of linearly independent paths through a program’s source code.\r\nYou can check the rate of delivery by counting the number of software releases. 
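The cyclomatic complexity metric mentioned above can be approximated with Python's standard ast module by counting decision points, a common simplification of McCabe's definition (start at 1, add 1 per branch). This is a rough sketch, not a full McCabe implementation; real tools handle more node types and count boolean operands individually.

```python
import ast

# Node types treated as introducing an extra linearly independent path.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factor"
"""
score = cyclomatic_complexity(sample)  # two ifs + one for -> 1 + 3 = 4
```

A function scoring above roughly 10 on this kind of measure is usually flagged as a refactoring candidate, which is how the metric ties back to maintainability.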
Another measure is the number of “stories” or user requirements shipped to the user.\r\nYou can test the GUI to make sure it’s simple and not frustrating for end-users. The problem is that GUI testing is complex and time-consuming – there are many possible GUI operations and sequences that require testing in most software. And that means it takes a long time to design test cases.\r\nThe complexity of GUI testing competes with the objective of releasing software quickly, which has necessitated the implementation of automated testing. Several test suites that completely simulate user behavior are available.","materialsDescription":" <span style=\"font-weight: bold;\">What are Software Quality Metrics?</span>\r\nThe word 'metrics' refers to standards for measurements. Software quality metrics measure attributes pertaining to software quality, along with its development process.\r\nThe term "software quality metrics" suggests measuring software quality by recording the number of defects or security loopholes present in the software. 
However, quality measurement is not restricted to counting defects or vulnerabilities; it also covers other quality aspects such as maintainability, reliability, integrity, usability, customer satisfaction, etc.\r\n<span style=\"font-weight: bold;\">Why Software Quality Metrics?</span>\r\n<ol><li>To define and categorize elements in order to have a better understanding of each and every process and attribute.</li><li>To evaluate and assess each of these processes and attributes against the given requirements and specifications.</li><li>Predicting and planning the next move with respect to software and business requirements.</li><li>Improving the overall quality of the process and product, and subsequently of the project.</li></ol>\r\n<span style=\"font-weight: bold;\">Software Quality Metrics: a sub-category of Software Metrics</span>\r\nIt is basically a subclass of software metrics that mainly emphasizes the quality aspects of the software product, process and project. A software metric is a broader concept that incorporates software quality metrics and mainly consists of three types of metrics:\r\n<ul><li><span style=\"font-weight: bold;\">Product Metrics:</span> include size, design, complexity, performance and other parameters that are associated with the product's quality.</li><li><span style=\"font-weight: bold;\">Process Metrics:</span> involve parameters like the time taken to locate and remove defects, response time for resolving issues, etc.</li><li><span style=\"font-weight: bold;\">Project Metrics:</span> may include the number of teams and developers involved, cost and duration of the project, etc.</li></ul>\r\n<span style=\"font-weight: bold;\">Features of good Software Quality Metrics:</span>\r\n<ul><li>Should be specific to measure the particular attribute or an attribute of greater importance.</li><li>Comprehensive for a wide variety of scenarios.</li><li>Should not consider attributes that have already been measured by some other 
metric.</li><li>Reliable to work similarly in all conditions.</li><li>Should be easy and simple to understand and operate.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Software_Quality_Analysis_and_Measurement.png","alias":"software-quality-analysis-and-measurement"},"405":{"id":405,"title":"Mobile Enterprise Application Platforms","description":"<span style=\"font-weight: bold; \">A mobile enterprise application platform (MEAP)</span> is a development environment that provides tools and middleware to develop, test, deploy and manage corporate software running on mobile devices.\r\nEnterprise mobile application development addresses the difficulties of developing mobile software by managing the diversity of devices, networks and user groups at the time of deployment and throughout the mobile computing technology lifecycle. Unlike standalone apps, a MEAP provides a comprehensive, long-term approach to deploying mobility. Cross-platform considerations are one big driver behind using MEAPs. For example, a company can use a MEAP to develop the mobile application once and deploy it to a variety of mobile devices (including smartphones, tablets, notebooks and ruggedized handhelds) with no changes to the underlying business logic.\r\nPlatform applications are best for companies that wish to deploy multiple applications on a single infrastructure, scaled to the size of their current mobile field force and available in an online and offline mode. An enterprise mobile app platform provides higher-level languages and easy development templates to simplify and speed up the mobile application development timeframe, requiring less programming knowledge for mobile business application deployment.\r\nThere are many advantages associated with enterprise mobile application development platforms. First of all, they can be run in the cloud. 
Without maintaining separate sets of code, mobile enterprise application platforms can support multiple types of operating systems and mobile devices. This means a company can deploy a mobile application to different mobile devices with the help of mobile enterprise application platforms without having to worry about compatibility. As most enterprise mobile development platforms have a tool set for modifications, creation of custom app extensions is quite easy and convenient. Enterprise mobile application platforms can centrally manage mobile applications and can also help in integration with multiple server data sources.","materialsDescription":"<h1 class=\"align-center\">What are the benefits of an enterprise mobile app platform? </h1>\r\n<ul><li>Create apps and complex forms for any type of mobile device and OS without having to maintain separate sets of code.</li><li>Create tailor-made apps for specific user groups, giving them exactly what they need; usually, a mash-up of reading/writing access to your backend systems, publicly available web services and device features such as camera, GPS, sign-on screen, etc.</li><li>Requires only basic coding skills, e.g. HTML and CSS.</li><li>Allows a high degree of re-use of the code and interactions developed.</li><li>Provides offline capability for mobile users in areas without WiFi or cellular coverage.</li><li>Once the platform is integrated into the important back-end systems, creating new apps and forms can be done in hours rather than weeks or months.</li><li>Enterprise mobile application development services can run in the cloud and be purchased on a subscription basis.</li></ul>\r\n<h1 class=\"align-center\">Pros and cons of MEAP</h1>\r\nAlong with the benefits described above, a mobile enterprise application platform extends beyond fourth-generation language (4GL) tools for app development to use a graphical environment and dedicated script language. 
The tool makes business apps accessible to users from any location at any time. For ease of IT management, some MEAP products can run as a cloud service.\r\nA MEAP, like any technology, comes with challenges. The initial investment is high - it's expensive to begin with, though the total cost of ownership (TCO) goes down with use over time - and it requires IT to perform additional tasks such as updating content, securing data, maintaining applications with updates and managing user authentication.\r\n<h1 class=\"align-center\">Important features</h1>\r\nIn general, a MEAP has two important features:\r\n<ul><li>A mobile application development environment and back-end web services to manage those mobile applications and link them to enterprise applications and databases.</li><li>A centralized management component that enables an administrator to control which users can access an application and what enterprise databases that application can pull data from.</li></ul>\r\nSometimes, organizations will use a mobile enterprise application platform in conjunction with enterprise mobility management (EMM) or mobile device management (MDM). MDM manages mobile devices, while MEAP products manage the enterprise applications running on those devices - although there is sometimes overlap between the functionalities of these two technologies.\r\n<br /><br /> ","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Mobile_Enterprise_Application_Platforms.png","alias":"mobile-enterprise-application-platforms"},"435":{"id":435,"title":"Rack Server","description":"A rack mount server is a great way to maximize your shelf space by packing a lot of servers into a small space. Rackmount servers are typically easier for administrators to manage due to proximity, modularity and better cable management. Lockable rack cabinet doors and steel server front panels provide an additional level of physical security. 
Additionally, rack unit designed servers are better able to keep the server components cool than the traditional tower form factor. Industry standard 19-inch racks will allow you to easily expand your business without taking up more valuable floor space.\r\nA lot of thought needs to go into which size rack server is the best bet for your project. Both current requirements and future expansion plans need to be taken into account to ensure your server remains capable in the future.\r\nBoth large and small projects can be built on the 1U server platform. "U" stands for "unit" and refers to height: 1U = 1.75 inches, or about 44.45 mm. A reasonable amount of storage can fit within a 1U chassis, processing power is barely constrained, and some models even allow up to two PCI-Express cards. Modern computer hardware uses much less power than it ever has in the past, which means less heat generation. Some 1U servers still produce some acoustic noise, but it is nowhere near the level of needing earmuffs as in the old days. The only reason to go up in size is for additional expansion options.\r\n2U models allow for multiple "low-profile" PCI-Express cards while keeping a compact form factor and also providing some additional storage space. If the plan is to use multiple full-height cards, then 3U or 4U servers should be the focus. The 4U models are very popular and offer flexible options. The 3U models do have limitations on expansion card compatibility and are really only for situations where rack space needs to be absolutely optimized (14x3U servers or 10x4U servers can fit in a 42U rack).","materialsDescription":"<span style=\"font-weight: bold;\">What is a ‘rack unit’?</span>\r\nA rack unit is the designated unit of measurement used when describing or quantifying the vertical space you have available in any equipment rack. One unit is equal to 1.75 inches, or 4.45 centimeters. 
Any equipment that has the ability to be mounted onto a rack is generally designed in a standard size to fit into many different server rack heights. It’s actually been standardized by the Electronic Industries Alliance (EIA). The most common heights are between 8U and 50U, but customization is also a viable option if you’re working with nonstandard sizes.\r\n<span style=\"font-weight: bold;\">Are there any specific ventilation requirements with server racks?</span>\r\nOver 65% of IT equipment failures are directly attributed to inadequate, poorly maintained, or failed air conditioning in the server room. So yes, proper ventilation is a critical part of maintaining any data center. Some cabinet manufacturers construct side panel ventilation instead of front and back ventilation, but experts say it’s inadequate for rack mount servers. This can be especially dangerous if more than one cabinet is being set up at once. The importance of proper ventilation should not be taken lightly, and you should always opt for front to back ventilation except in network applications where the IT equipment exhausts out the side.\r\n<span style=\"font-weight: bold;\">What is meant by ‘server rack depth’?</span>\r\nServer rack depth is a critical aspect of the ventilation process. Connectworld.net says, “Server cabinet depth is important not only because it has to allow room for the depth of the particular equipment to be rack-mounted (deep servers vs. 
routers or switches), but also it has to allow sufficient room for cables, PDUs, as well as airflow.”<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Rack_Server.png","alias":"rack-server"},"441":{"id":441,"title":"Pen Tablets","description":" A graphics tablet (also known as a digitizer, drawing tablet, drawing pad, digital drawing tablet, pen tablet, or digital art board) is a computer input device that enables a user to hand-draw images, animations and graphics, with a special pen-like stylus, similar to the way a person draws images with a pencil and paper. These tablets may also be used to capture data or handwritten signatures. They can also be used to trace an image from a piece of paper which is taped or otherwise secured to the tablet surface. Capturing data in this way, by tracing or entering the corners of linear poly-lines or shapes, is called digitizing.\r\nThe device consists of a flat surface upon which the user may "draw" or trace an image using the attached stylus, a pen-like drawing apparatus. The image is displayed on the computer monitor, though some graphic tablets now also incorporate an LCD screen for a more realistic or natural experience and usability.\r\nSome tablets are intended as a replacement for the computer mouse as the primary pointing and navigation device for desktop computers.\r\nGraphic tablets, because of their stylus-based interface and ability to detect some or all of pressure, tilt, and other attributes of the stylus and its interaction with the tablet, are widely considered to offer a very natural way to create computer graphics, especially two-dimensional computer graphics. 
Indeed, many graphic packages can make use of the pressure (and, sometimes, stylus tilt or rotation) information generated by a tablet, by modifying the brush size, shape, opacity, color, or other attributes based on data received from the graphic tablet.\r\nIn East Asia, graphic tablets, known as "pen tablets", are widely used in conjunction with input-method editor software (IMEs) to write Chinese, Japanese, and Korean characters (CJK). The technology is popular and inexpensive and offers a method for interacting with the computer in a more natural way than typing on the keyboard, with the pen tablet supplanting the role of the computer mouse. Uptake of handwriting recognition among users who use alphabetic scripts has been slower.\r\nGraphic tablets are commonly used in the artistic world. Using a pen-like stylus on a graphic tablet combined with a graphics-editing program, such as Illustrator or Photoshop by Adobe Systems, or CorelDraw, gives artists a lot of precision when creating digital drawings or artwork. Photographers can also find working with a graphic tablet during their post processing can really speed up tasks like creating a detailed layer mask or dodging and burning.\r\nEducators make use of tablets in classrooms to project handwritten notes or lessons and to allow students to do the same, as well as providing feedback on student work submitted electronically. Online teachers may also use a tablet for marking student work, or for live tutorials or lessons, especially where complex visual information or mathematical equations are required. Students are also increasingly using them as note-taking devices, especially during university lectures while following along with the lecturer.\r\nTablets are also popular for technical drawings and CAD, as one can typically put a piece of paper on them without interfering with their function.\r\nFinally, tablets are gaining popularity as a replacement for the computer mouse as a pointing device. 
They can feel more intuitive to some users than a mouse, as the position of a pen on a tablet typically corresponds to the location of the pointer on the GUI shown on the computer screen. Artists who use a pen for graphic work will, as a matter of convenience, use the tablet and pen for standard computer operations rather than put down the pen and find a mouse. The popular game osu! can also be played with a tablet.\r\nGraphic tablets are available in various sizes and price ranges, with A6-sized tablets being relatively inexpensive and A3-sized tablets far more expensive. Modern tablets usually connect to the computer via a USB or HDMI interface. ","materialsDescription":" <span style=\"font-weight: bold;\">What is a pen tablet?</span>\r\nAlso called a drawing tablet or a pen tablet, a graphics tablet is a natural input device that converts information from a handheld stylus. The user handles the stylus like a pen, pencil, or paintbrush, pressing its tip on the tablet surface. The device can also be used as a replacement for a computer mouse.\r\n<span style=\"font-weight: bold;\">Who uses graphics tablets?</span>\r\n<ul><li>Architects and engineers;</li><li>Artists;</li><li>Cartoonists;</li><li>Fashion designers;</li><li>Graphic designers;</li><li>Illustrators;</li><li>Photographers;</li><li>Teachers.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Pen_Tablets.png","alias":"pen-tablets"},"443":{"id":443,"title":"Application Delivery Controller (load balancer) - appliance","description":" Application Delivery Controllers are the next generation of load balancers, and are typically located between the firewall/router and the web server farm. An application delivery controller is a network device that helps sites direct user traffic to remove excess load from two or more servers. In addition to providing Layer 4 load balancing, ADCs can manage Layer 7 for content switching, and also provide SSL offload and acceleration. 
They tend to offer more advanced features such as content redirection as well as server health monitoring. An application delivery controller may also be known as a Web switch, URL switch, Web content switch, content switch or Layer 7 switch.\r\nToday, advanced application delivery controllers and intelligent load balancers are not only affordable, but the consolidation of Layer 4-7 load balancing and content switching, and server offload capabilities such as SSL, data caching and compression, provides companies with cost-effective out-of-the-box infrastructure.\r\nFor enterprise organizations (companies with 1,000 or more employees), integrating best-of-breed network infrastructure is commonplace. However, best-of-breed does not mean deploying networks with enterprise-specific features and expensive products, but rather deploying products that are purpose-built, with the explicit features, performance, reliability and scalability required by companies of all sizes.\r\nIn general, businesses of all sizes are inclined to purchase “big brand” products. However, smaller vendors that offer products within the same category can provide the optimal performance, features and reliability required, with the same benefits - at a lower cost.\r\nFor the enterprise market, best-of-breed comes with a high Total Cost of Ownership (TCO), since deploying products from various manufacturers requires additional training, maintenance and support. Kemp can help SMBs lower their TCO, and help them build reliable, high-performance and scalable web and application infrastructure. Kemp products have a high price/performance value for SMBs. 
Our products are purpose-built for SMBs and cost dramatically less than those of “big name” ADC and SLB vendors, who develop features that only enterprise customers might use.","materialsDescription":" <span style=\"font-weight: bold;\">What are application delivery controllers?</span>\r\nApplication Delivery Controllers (ADCs) are the next stage in the development of server load balancing solutions. ADCs not only balance user requests between servers, but also incorporate mechanisms that increase the performance, security and resiliency of applications, as well as ensure their scalability.\r\n<span style=\"font-weight: bold;\">And what other possibilities do application controllers have?</span>\r\nIn addition to the uniform distribution of user requests, application delivery controllers have many other interesting features. They can provide around-the-clock availability of services, improve web application performance by up to five times, reduce risks when launching new services, protect confidential data, and publish internal applications to the outside with secure external access (a potential replacement for the discontinued Microsoft TMG).\r\nOne of the most important functions of application delivery controllers, which distinguishes them from simple load balancers, is functionality capable of processing the information delivered to the user according to defined rules.\r\n<span style=\"font-weight: bold;\">What are the prerequisites for implementing application delivery controllers in a particular organization?</span>\r\nA number of factors can determine the criteria for deciding whether to implement application controllers in your organization. The first is poor performance of web services: slow content loading, frequent hangs and crashes. 
Second, interruptions in services and communication channels can be such a prerequisite: failures in the transmitting and receiving equipment that keeps the data transmission network running, as well as failures of the servers themselves.\r\nIn addition, it is worth thinking about implementing application delivery controllers if you use Microsoft TMG or Cisco ACE products, since they are no longer supported by the manufacturer. A prerequisite for the implementation of ADC may be the launch of new large web projects, since such a launch inevitably entails keeping the web project operational while maintaining high fault tolerance and performance.\r\nAlso, controllers are needed when you need to provide fault tolerance, continuous availability and high speed for applications that are consolidated in the data center. A similar situation arises when it is necessary to build a backup data center: here you also need to ensure fault tolerance between several data centers located in different cities.\r\n<span style=\"font-weight: bold;\">What are the prospects for the introduction of application controllers in Russia and in the world?</span>\r\nGartner's research shows that there have recently been marked changes in the market for products that offer load balancing mechanisms. In this segment, user demand is shifting from servers implementing a simple load balancing mechanism to devices offering richer functionality.\r\nGartner: “The era of load balancing has long gone, and companies need to focus on products that offer richer application delivery functionality.”\r\nIn Russia, due to the specifics of the domestic IT market, application controllers are implemented mainly for some specific piece of functionality rather than for the comprehensive application delivery solution that such a product offers. 
The main task for which application delivery controllers are now most often sold is the same load balancing function as before.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Application_Delivery_Controller_load_balancer_appliance.png","alias":"application-delivery-controller-load-balancer-appliance"},"445":{"id":445,"title":"Penetration Testing","description":" A <span style=\"font-weight: bold; \">penetration test</span>, colloquially known as a pen test, <span style=\"font-weight: bold; \">pentest </span>or <span style=\"font-weight: bold; \">ethical hacking</span>, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system.\r\nA standard penetration test is performed to identify both weaknesses (also referred to as <span style=\"font-weight: bold; \">vulnerabilities</span>), including the potential for unauthorized parties to gain access to the system's features and data, and strengths, enabling a full risk assessment to be completed. \r\nThe main objective of system penetration testing is to identify security weaknesses. Vulnerability testing can also be used to test an organization's security policy, its adherence to compliance requirements, its employees' security awareness and the organization's ability to identify and respond to security incidents.\r\nTypically, information about security weaknesses that are identified or exploited through <span style=\"font-size:11pt; font-family:Arial; font-style:normal; \">professional penetration testing</span> is aggregated and provided to the organization's IT and network system managers, enabling them to make strategic decisions and prioritize remediation efforts. \r\nA wide variety of <span style=\"font-weight: bold; \">software security testing tools </span>are available to assist with penetration testing, including free-of-charge, free software, and commercial software. 
Penetration tools scan code in order to identify malicious code in applications that could result in a security breach. Pen testing tools examine data encryption techniques and can identify hard-coded values, such as usernames and passwords, to verify security vulnerabilities in the system.\r\nAn important aspect of any penetration testing program is defining the scope within which the pen testers must operate. Usually, the scope defines what systems, locations, techniques and tools can be used in a penetration test. Limiting the scope of the penetration test helps focus team members - and defenders - on the systems over which the organization has control.\r\n<p class=\"align-center\"><span style=\"font-weight: bold;\">Here are several of the main vulnerability penetration testing approaches:</span></p>\r\n<ul><li><span style=\"font-weight: bold;\">Targeted testing</span> is performed by the organization's IT team and the penetration testing team working together. It's sometimes referred to as a "lights turned on" approach because everyone can see the test being carried out.</li><li><span style=\"font-weight: bold;\">External testing</span> targets a company's externally visible servers or devices including domain name servers, email servers, web servers or firewalls. The <span style=\"font-size:11pt; font-family:Arial; font-style:normal; \">objective of penetration testing</span> is to find out if an outside attacker can get in and how far they can get in once they've gained access.</li><li><span style=\"font-weight: bold;\">Internal testing</span> mimics an inside attack behind the firewall by an authorized user with standard access privileges. 
This kind of test is useful for estimating how much damage a disgruntled employee could cause.</li><li><span style=\"font-weight: bold;\">Blind testing</span> simulates the actions and procedures of a real attacker by severely limiting the information given to the person or team performing the test beforehand. Typically, the pen testers may only be given the name of the company.</li><li><span style=\"font-weight: bold;\">Double-blind testing</span> takes the blind test and carries it a step further. In this type of pen test, only one or two people within the organization might be aware a test is being conducted. Double-blind tests can be useful for testing an organization's security monitoring and incident identification as well as its response procedures.</li><li><span style=\"font-weight: bold;\">Black box</span> testing is basically the same as blind testing, but the tester receives no information before the test takes place. Rather, the pen testers must find their own way into the system.</li><li><span style=\"font-weight: bold;\">White box</span> testing provides the penetration testers with information about the target network before they start their work. This information can include such details as IP addresses, network infrastructure schematics and the protocols used, plus the source code.</li></ul>","materialsDescription":"<h1 class=\"align-center\"> <span style=\"font-weight: normal;\">What Is Penetration Testing?</span></h1>\r\nThere is a considerable amount of confusion in the industry regarding the differences between vulnerability assessment and penetration testing, as the two phrases are commonly interchanged. However, their meaning and implications are very different. 
A <span style=\"font-weight: bold; \">vulnerability assessment </span>simply identifies and reports noted vulnerabilities, whereas a pentest attempts to exploit the vulnerabilities to determine whether unauthorized access or other malicious activity is possible.<span style=\"font-weight: bold; \"> Penetration testing</span> typically includes network penetration testing and web application security testing as well as controls and processes around the networks and applications, and should occur from both outside the network trying to come in (external testing) and from inside the network.\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What is a pentesting tool?</span></h1>\r\n<p class=\"align-left\">Penetration tools are used as part of testing to automate certain tasks, improve testing efficiency and discover issues that might be difficult to find using manual analysis techniques alone. Two common penetration testing tools are <span style=\"font-weight: bold; \">static analysis </span>tools and <span style=\"font-weight: bold; \">dynamic analysis</span> tools. Tools for attack include software designed to produce <span style=\"font-weight: bold; \">brute-force attacks</span> or <span style=\"font-weight: bold; \">SQL injections</span>. There is also hardware specifically designed for pen testing, such as small inconspicuous boxes that can be plugged into a computer on the network to provide the hacker with remote access to that network. In addition, an ethical hacker may use social engineering techniques to find vulnerabilities - for example, sending phishing emails to company employees, or even disguising themselves as delivery people to gain physical access to the building.</p>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the benefits of penetration testing?</span></h1>\r\n<ul><li><span style=\"font-weight: bold;\">Manage the Risk Properly. 
</span>For many organizations, one of the most popular benefits of pen testing services is that they give you a baseline to work from to address risks in a structured and optimal way. They show you the list of vulnerabilities in the target environment and the risks associated with them.</li><li><span style=\"font-weight: bold;\">Increase Business Continuity.</span> Business continuity is the prime concern for any successful organization. A break in business continuity can happen for many reasons; security loopholes are one of them. Insecure systems suffer more breaches in their availability than secured ones. Today, attackers are hired to disrupt business continuity by exploiting vulnerabilities to gain access and produce a denial-of-service condition, which usually crashes the vulnerable service and breaks server availability.</li><li><span style=\"font-weight: bold;\">Protect Clients, Partners, and Third Parties.</span> A security breach can affect not only the target organization but also its associated clients, partners and third parties. However, if a company schedules penetration tests regularly and takes the necessary security actions, it helps build trust and confidence in the organization.</li><li><span style=\"font-weight: bold;\">Helps to Evaluate Security Investment. </span> The pen test results will give you an independent view of the effectiveness of existing security processes, ensuring that configuration management practices have been followed correctly. This is an ideal opportunity to review the efficiency of the current security investment. 
You can see what is working, what needs to be improved, and how much investment is needed to build a more secure environment in the organization.</li><li><span style=\"font-weight: bold;\">Help Protect Public Relations and Guard the Reputation of Your Company.</span> A good public image and company reputation are built up over many years of struggle and hard work, and with a huge amount of investment. This can suddenly change due to a single security breach.</li><li><span style=\"font-weight: bold;\">Protection from Financial Damage.</span> A simple breach of the security system may cause millions of dollars of damage. Penetration testing can protect your organization from such damages.</li><li><span style=\"font-weight: bold;\">Helps to Test Cyber-defense Capability.</span> During a penetration test, the target company’s security team should be able to detect multiple attacks and respond in time. Furthermore, if an intrusion is detected, the security and forensic teams should start investigations, and the penetration testers should be blocked and their tools removed. The effectiveness of your protection devices like IDS, IPS or WAF can also be tested during a penetration test.</li><li><span style=\"font-weight: bold;\">Client-side Attacks. </span>Pen tests are an effective way of testing whether highly targeted client-side attacks against key members of your staff would succeed. Security should be treated with a holistic approach. Companies that only assess the security of their servers run the risk of being targeted with client-side attacks exploiting vulnerabilities in software like web browsers, PDF readers, etc. 
It is important to ensure that the patch management processes are working properly, keeping the operating system and third-party applications up to date.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Penetration_Testing.png","alias":"penetration-testing"},"447":{"id":447,"title":"Corporate Telephony","description":" Corporate telephony comprises the communication complexes and networks within corporate communications, usually created for geographically distributed enterprises, providing connectivity, a single address space and a single set of services. Convergent telephony is actively used when building the telephone network.\r\nWhen creating corporate telephony, the following tasks are solved:\r\n<ul><li>Improving the security of information transmitted over communication channels</li><li>Reducing telephone costs</li><li>Simplifying operation</li><li>Improving communication reliability</li><li>Improving communication quality</li></ul>\r\nConvergent telephony involves the use of both traditional and IP telephony. IP telephony delivers high-quality voice transmission, as confirmed by the very large number of successfully installed telephony systems around the world.","materialsDescription":" <span style=\"font-weight: bold; \">What is corporate IP telephony software?</span>\r\nThe corporate telephony market is evolving from a focus on innovation in proprietary hardware to the use of commodity hardware and standards-based software. While most telephony solutions are Internet Protocol (IP)-enabled or IP-PBX solutions, the associated endpoints are a mix of time division multiplexing (TDM) and IP. 
Corporate telephony platforms focus on high-availability, scalable solutions, which support Session Initiation Protocol (SIP), desktop and soft phone functionality, and the ability to integrate with enterprise IT applications while delivering toll-grade voice quality.\r\n<span style=\"font-weight: bold; \">What is a Call Center?</span>\r\nA call center is a set of specialized automatic call distribution software that provides efficient routing and optimal selection of resources that increase the productivity of operators and the contact center as a whole. In a call center, a client can get online help, place an order, leave a message, etc.\r\nThe composition:\r\n<ul><li>The telephone platform + automatic call router (ACD)</li><li>The camera room</li><li>The interactive Voice Response System</li><li>The reporting system</li><li>Multimedia client interaction systems</li><li>Recording Systems (Nice, Verint)</li><li>The fax Server (Smartphone)</li><li>The management software</li></ul>\r\nBenefits from the implementation of call centers:\r\n<ul><li>Operational processing of a large number of incoming calls with a minimum number of operators</li><li>Improving the quality of customer service by operators</li><li>Control over the activities of operators</li><li>Automating the process of providing standard background information.</li></ul>\r\n<span style=\"font-weight: bold;\">What are DECT Microcellular Communication Systems?</span>\r\nRecently, more and more users have preferred the use of microcellular DECT systems for organizing wireless communications in an enterprise, office or institution. 
The standard is based on digital radio transmission of data between radio base stations and radiotelephones using time division multiple access technologies.\r\nThis technology provides the most efficient use of the radio frequency spectrum, has high noise immunity and low transmitter radiation.\r\nThe introduction of DECT systems will give the company a number of advantages:\r\n<ul><li>Increased mobility and accessibility of staff for communication interaction.</li><li>Ensuring a high degree of protection of telephone conversations (they cannot be intercepted).</li><li>Providing high-quality corporate communications.</li><li>No need for permission to use frequencies.</li><li>Ability to organize an autonomous telephone network in the office. There are no restrictions on the distance at which base stations can be located within the corporate network (including remote branches, warehouses, etc.), which eliminates the need to install separate PBX systems in remote offices.</li><li>Reducing the cost of building a corporate IT infrastructure by reducing the total number of cables.</li></ul>\r\nThis type of communication is safe for the health of employees.
DECT uses the frequency range of 1880-1900 MHz and has an extremely low radiation power - 10 mW.\r\nWhen choosing a DECT microcellular system, consider the following limitations:\r\n<ul><li>The maximum number of base stations in the system (important if you need stable radio communication over large areas).</li><li>The maximum number of mobile DECT handsets in the system (important when planning a communication system built primarily on the wireless principle).</li><li>The number of simultaneous calls supported by each base station (important when serving a large number of mobile subscribers in a limited space).</li></ul>\r\nA qualified team from the System Project company can implement DECT-based telephony deployment projects for your enterprise, office or institution.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Corporate_Telephony.png","alias":"corporate-telephony"},"451":{"id":451,"title":"Printers and All-in-Ones","description":" An MFP (multi-function product/printer/peripheral), multi-functional, all-in-one (AIO), or multi-function device (MFD), is an office machine which incorporates the functionality of multiple devices in one, so as to have a smaller footprint in a home or small business setting (the SOHO market segment), or to provide centralized document management/distribution/production in a large-office setting. A typical MFP may act as a combination of some or all of the following devices: email, fax, photocopier, printer, scanner.\r\nMFP manufacturers traditionally divided MFPs into various segments. The segments roughly divided the MFPs according to their speed in pages-per-minute (ppm) and duty-cycle/robustness. However, many manufacturers are beginning to avoid the segment definition for their products, as speed and basic functionality alone do not always differentiate the many features that the devices include.
Two color MFPs of a similar speed may end up in the same segment, despite having potentially very different feature-sets, and therefore very different prices. From a marketing perspective, the manufacturer of the more expensive MFP would want to differentiate their product as much as possible to justify the price difference, and therefore avoids the segment definition.\r\nMany MFP types, regardless of the category they fall into, also come in a "printer only" variety, which is the same model without the scanner unit included. This can even occur with devices where the scanner unit physically appears highly integrated into the product.\r\nAs of 2013, almost all printer manufacturers offer multifunction printers. They are designed for home, small business, enterprise, and commercial use. Naturally, the cost, usability, robustness, throughput, output quality, etc. all vary with the various use cases. However, they all generally perform the same functions: print, scan, fax, and photocopy. In the commercial/enterprise area, most MFPs have used laser-printer technology, while personal and SOHO environments utilize inkjet methods. Typically, inkjet printers have struggled with delivering the performance and color-saturation demanded by enterprise/large business use. However, HP has recently launched a business-grade MFP using inkjet technology.\r\nIn any case, instead of rigidly defined segments based on speed, more general definitions based on the intended target audience and capabilities are becoming much more common as of 2013. While the sector lacks formal definitions, it is commonly agreed amongst MFP manufacturers that the products fall roughly into the following categories: all-in-one, SOHO MFP, office MFP, production printing MFP.","materialsDescription":" <span style=\"font-weight: bold; \">What is a multifunction printer?</span>\r\nA multifunction printer (MFP) is a device that consolidates the functionality of a printer, copier, scanner and/or fax into one machine.
Multifunction printers are a common choice for budget-minded businesses that want to consolidate assets, reduce costs and improve workflow. As you move to more digital workflows, take a look at our list of multifunction printers specifically recommended for scanning documents.\r\n<span style=\"font-weight: bold; \">Multifunction Printer Evaluation Considerations</span>\r\nTo make an informed decision about what multifunction printer is right for you, you need to ask the right questions. Here are the 10 things you must know before you buy a multifunction printer.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">1. Know your requirements.</span></span>\r\nUnderstand what you need the multifunction printer to do for you and your end users. Beyond printing and copying, how do you want to use the multifunction printer to help manage documents, reduce paper, simplify workflow, scan to the cloud, work remotely, etc.? How many copy, print, fax, scan and email jobs will you run each day? How many users will share the device? Will you need it to be color capable? Wireless? Mobile- and cloud-connected? There are a number of requirements to consider.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">2. Know the total cost of ownership (TCO) and cost/value benefits.</span></span>\r\nWhen evaluating a multifunction printer, beware of looking only at the cost of the initial hardware. There are a number of other factors to consider, including the cost of supplies. Once ink costs are taken into consideration, inkjet multifunction printers, initially perceived as being low-cost, often turn out to have an equivalent or higher TCO than the better-performing laser multifunction printers. 
TCO can also increase significantly for devices that are hard to use and maintain, unreliable, or lack the features and capability to efficiently and effectively produce the results you need.\r\nYour multifunction printer can become a useful asset in managing and controlling costs for printing and imaging, and can also add new capabilities to your organization if you choose wisely. Consider how multifunction printers can address total cost of ownership for printing and imaging assets, better consolidate and improve management of resources across the organization, and improve business process efficiency.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">3. Know what third parties have to say.</span></span>\r\nCompare the data on the actual performance and management and support issues promoted on the vendor's specification sheets with data from independent testing agencies. What are experienced people in the industry saying about the quality and performance of the product you are considering?\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">4. Know how easy it is to connect to an existing network.</span></span>\r\nConsider how easily the multifunction printer system will integrate with your existing network. Is it easy to deploy? Does it require minimal start-up training? Does it come with software or wizards to guide you through installation, troubleshooting and upgrading?\r\nIf your workgroup needs to print from multiple, distributed devices (smartphones, tablets, laptops, etc.) to one easily accessible location, then consider buying a wireless, or WiFi, multifunction printer. WiFi multifunction printers connect to a network without needing to be hard-wired or cabled into that network. This enables easy mobile printing, without unsightly cords to trip over.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">5. 
Know how easy it is to use.</span></span>\r\nPrevent bottlenecks and costly employee downtime by finding a multifunction printer that's easy to operate. Check for intuitive user interfaces, minimal training requirements, and easily accessible online help and documentation. If you do need support, check that the product is backed by manufacturer-provided service and support coverage.\r\nTablet-like interfaces make the newest-generation multifunction printers especially easy to use. They let you touch, swipe, pinch and scroll just like you would on a smartphone or tablet. And with apps integrated into the interface, you can add, delete or swap tools for your own customized workflows.\r\nMobile- and cloud-connected multifunction printers make it easy to work from just about anywhere. On these MFPs, apps become your shortcut for downloading, sharing, printing, scanning, distributing -- even translating -- documents on the go.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">6. Know its multitasking abilities.</span></span>\r\nLook for a multifunction printer that can truly multifunction. Can users access each function they need, even if other functions are already in use? Be aware that some products, such as All-in-Ones (AiOs), offer multiple functions all in one device, but may not multitask simultaneously. If they cannot deliver all the functions of a multifunction printer concurrently, then you may risk downtime due to bottlenecks.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">7. Know its bi-directional communication capabilities.</span></span>\r\nA failure to communicate timely and accurate information to users and IT administrators on the status of jobs, queues, and devices will result in more intervention by you and your staff to solve, prevent or anticipate problems. 
Solid bi-directional communication, both at the multifunction printer and across the network, is essential to keeping a product running consistently. Look for print job and device status capabilities from the desktop and the ability to view all job queues at the device and across the network.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">8. Know the available device management, remote intelligence and support.</span></span>\r\nConsider the vendor's commitment to providing robust device and fleet management tools and utilities. This is something you may want now or in the future. Look for device relationship management software that optimizes the multifunction printer’s availability and uptime. Does the vendor provide superior response time and consistent quality of service? You want to trust your multifunction printer will stay up and running to ensure you have an efficient and productive office.\r\nThe more sophisticated multifunction printers let you not only manage fleets, but also information. With the right tools built in, such as integration with Managed Print Services, your MFP becomes the hub of your document environment. It can automate business processes, optimize device management remotely, and assist your help desk with built-in tools. An app-connected interface opens a gateway of future possibilities for device and document management as well.\r\n<span style=\"font-weight: bold;\"><span style=\"font-style: italic;\">9. Know whether it provides the level of security and confidentiality you need.</span></span>\r\nDoes the device offer the appropriate level of security for your business? Is it scalable to provide more security if your needs change?\r\nLeft unchecked, multifunction printers can be vulnerable entry points for data breaches or malicious attacks.
The best way to keep your data secure is to choose multifunction printers that exceed industry standards for intrusion prevention, device detection and data encryption. Also look for multifunction printer manufacturers who partner with information technology security experts, such as McAfee and Cisco.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">10. Know what software and solutions are available.</span></span>\r\nUnderstand what compatible software and solutions are available from the vendor, as well as their solution partners.\r\nMultifunction printers can help you streamline duplicate and cumbersome document processes and electronically organize, edit and archive your paper documents. With a multifunction printer and a simple software application, you can turn paper documents into electronic formats and send them to multiple destinations - email, cloud-based document repositories, network folders, remote printers, back-office automation systems, etc. - with a single scan.\r\nApp-enabled multifunction printers take these processes a step further. They put functions and workflows into an easy, app interface like what you see on your mobile phone or tablet. They let you print from or scan to the cloud, and connect smartphones to WiFi MFPs so you can work from anywhere, anytime.\r\nOnce you're armed with the knowledge you've gathered by asking these questions, you'll be prepared to make the right decision for your business.
","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Printers_and_All_in_Ones.png","alias":"printers-and-all-in-ones"},"453":{"id":453,"title":"Desktop PC","description":" A desktop computer is a personal computer designed for regular use at a single location on or near a desk or table due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit (CPU), memory, bus, and other electronic components), disk storage (usually one or more hard disk drives, solid state drives, optical disc drives, and in early models a floppy disk drive); a keyboard and mouse for input; and a computer monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk.\r\nDesktop computers are designed to work at a desk. Usually, they are bigger and more powerful than other types of personal computers. Desktop computers are made up of individual components. The main component is called the system unit - usually, it is a rectangular case that sits on or under the desk. Other components, such as the monitor, mouse and keyboard, are connected to the system unit.\r\nAs a rule, all additional external devices are connected to the PC system unit using special connectors. Most of these are located on its rear panel, while some of the most frequently used, such as USB connectors and audio outputs, are brought to the front.
The system unit itself consists of internal devices, called components.","materialsDescription":" Main components of the desktop system unit:\r\n<ul><li><span style=\"font-weight: bold;\">A CPU</span> is the main information processing and computer control device.</li><li><span style=\"font-weight: bold;\">A video card</span> is a device for processing two-dimensional and three-dimensional graphics, as well as displaying an image on a monitor (screen).</li><li><span style=\"font-weight: bold;\">RAM</span> - used for short-term storage of data during operation of the computer. When the computer is turned off, the information recorded in RAM disappears.</li><li><span style=\"font-weight: bold;\">A storage device (hard disk)</span> - used as the primary means for storing all user data and programs. Its capacity is much greater than that of RAM; however, its read and write speeds are lower.</li><li><span style=\"font-weight: bold;\">A motherboard</span> is a complex device that combines all the components of a personal computer and ensures their well-coordinated work.</li><li><span style=\"font-weight: bold;\">An optical drive</span> - a device for reading and writing information on optical CDs, DVDs and Blu-ray discs.</li><li><span style=\"font-weight: bold;\">A case</span> - protects all components from harmful external influences (for example, moisture) and gives an aesthetic look to your computer.</li><li><span style=\"font-weight: bold;\">A power supply unit</span> converts the high-voltage alternating current of ordinary electrical networks (220 V) into the low-voltage direct current (12 V, 5 V and 3 V) required for powering computer components.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Desktop_PC.png","alias":"desktop-pc"},"455":{"id":455,"title":"Portable PC","description":" A portable computer is a computer designed to be easily moved from one place to another and includes a display
and keyboard. The first commercially sold portable was the 50-pound IBM 5100, introduced in 1975. The next major portables were Osborne's 24-pound CP/M-based Osborne 1 (1981) and Compaq's 28-pound 100% IBM PC compatible Compaq Portable (1983). These "luggable" computers lacked the next technological development, not requiring an external power source; that feature was introduced by the laptop. Laptops were followed by lighter models, so that in the 2000s mobile devices, and by 2007 smartphones, made the term almost meaningless. The 2010s introduced wearable computers such as smartwatches.\r\nPortable computers, by their nature, are generally microcomputers. Larger portable computers were commonly known as 'Lunchbox' or 'Luggable' computers. They are also called 'Portable Workstations' or 'Portable PCs'. In Japan they were often called 'Bentocom' (ベントコン, Bentokon), from "bento".\r\nPortable computers, more narrowly defined, are distinct from desktop replacement computers in that they usually were constructed from full-specification desktop components, and often do not incorporate features associated with laptops or mobile devices. A portable computer in this usage, versus a laptop or other mobile computing device, has a standard motherboard or backplane providing plug-in slots for add-in cards. This allows mission-specific cards such as test, A/D, or communication protocol (IEEE-488, 1553) cards to be installed. Portable computers also provide for more disk storage by using standard disk drives and provide for multiple drives.\r\nPortable computers have been increasing in popularity over the past decade, as they do not restrict the user's mobility as a desktop computer does, and do not restrict the computer power and storage available as a laptop computer does.
Wireless access to the Internet, extended battery life, and more elaborate cases permitting multiple screens and even significant RAID capacity have contributed to this trend.","materialsDescription":"<span style=\"font-weight: bold; \">What does Portable Computer mean?</span>\r\nA portable computer is a computer that comes with a keyboard and display and can be easily relocated or transported, although it is less convenient than a notebook.\r\nThey have lower specifications and are not well suited for full-time usage as they are less ergonomic. However, they take up less space than desktop computers and come with most features found on a desktop. \r\n<span style=\"font-weight: bold; \">What are the advantages of a portable PC?</span>\r\nAdvantages of a portable computer:\r\n<ul><li>Unlike laptops and other mobile computing devices, portable computers use standard motherboards and provide plug-in slots for add-in cards.</li><li>Portability and flexibility of use are definite advantages over desktop computers.</li><li>Portable computers take up less space than desktop computers and are smaller in size.</li><li>A portable computer consumes less power than a desktop computer, which can help with power and cost savings.</li><li>Compared to desktop computers, immediacy is more pronounced in the case of portable computers.</li></ul>\r\n<span style=\"font-weight: bold;\">What are the disadvantages of a portable PC?</span>\r\nDisadvantages of a portable computer:\r\n<ul><li>They have a lower specification than most desktop systems.</li><li>They are less ergonomic and less suited for full-time usage in most cases.</li><li>Expansion is difficult and any repair could prove costly.</li><li>Most portable computers are not upgradeable.</li><li>Compared to desktop systems, they are less reliable, mostly due to overheating problems, and often run
slower.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Portrable_PC.png","alias":"portable-pc"},"459":{"id":459,"title":"Monitor","description":" A computer monitor is an output device that displays information in pictorial form. A monitor usually comprises the display device, circuitry, casing, and power supply. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) with LED backlighting, which has replaced cold-cathode fluorescent lamp (CCFL) backlighting. Older monitors used a cathode ray tube (CRT). Monitors are connected to the computer via VGA, Digital Visual Interface (DVI), HDMI, DisplayPort, Thunderbolt, low-voltage differential signaling (LVDS) or other proprietary connectors and signals.\r\nOriginally, computer monitors were used for data processing while television sets were used for entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions and computer monitors has changed from 4:3 to 16:10 and then to 16:9.\r\nModern computer monitors are easily interchangeable with conventional television sets. However, as computer monitors do not necessarily include integrated speakers, it may not be possible to use a computer monitor without external components.","materialsDescription":" <span style=\"font-weight: bold; \">What is an LCD monitor (TFT)?</span>\r\nA liquid crystal monitor (also liquid crystal display, LCD monitor, flat-panel display) is a flat monitor based on liquid crystals.\r\nLCD TFT is one of the names of the liquid crystal display, which uses an active matrix controlled by thin-film transistors.
The TFT amplifier for each subpixel is used to increase the speed, contrast and clarity of the display image.\r\n<span style=\"font-weight: bold; \">How does an LCD monitor work?</span>\r\nEach pixel of the LCD display consists of a layer of molecules between two transparent electrodes, and two polarizing filters, the polarization planes of which (as a rule) are perpendicular. In the absence of liquid crystals, the light transmitted by the first filter is almost completely blocked by the second.\r\nThe surface of the electrodes in contact with liquid crystals is specially processed for the initial orientation of the molecules in one direction. In the TN matrix, these directions are mutually perpendicular, therefore, the molecules line up in a helical structure in the absence of voltage. This structure refracts the light in such a way that, before the second filter, the plane of its polarization rotates, and light passes through it already without loss. Except for the absorption by the first filter of half of the unpolarized light, the cell can be considered transparent. If voltage is applied to the electrodes, the molecules tend to line up in the direction of the field, which distorts the helical structure. In this case, the elastic forces counteract this, and when the voltage is turned off, the molecules return to their original position. With a sufficient field value, almost all molecules become parallel, which leads to the opacity of the structure. By varying the voltage, you can control the degree of transparency. If a constant voltage is applied for a long time, the liquid crystal structure may degrade due to ion migration. To solve this problem, an alternating current is applied, or a change in the field polarity at each addressing of the cell (the opacity of the structure does not depend on the field polarity). 
In the entire matrix, each of the cells can be controlled individually, but with an increase in their number this becomes difficult to accomplish, as the number of required electrodes increases. Therefore, row and column addressing is used almost everywhere. The light passing through the cells can be natural - reflected from the substrate (in LCD displays without backlight). More often, however, an artificial light source is used; in addition to independence from external lighting, this also stabilizes the properties of the resulting image. Thus, a full-fledged LCD monitor consists of electronics that process the input video signal, an LCD matrix, a backlight module, a power supply and a housing. It is the combination of these components that determines the properties of the monitor as a whole, although some characteristics are more important than others.\r\n<span style=\"font-weight: bold;\">What are the most important features of LCD monitors?</span>\r\n<ul><li><span style=\"font-style: italic;\">Resolution:</span> The horizontal and vertical sizes, expressed in pixels. Unlike CRT monitors, LCDs have one “native” physical resolution; other resolutions are achieved by interpolation.</li><li><span style=\"font-style: italic;\">Dot pitch:</span> The distance between the centers of adjacent pixels. Directly related to the physical resolution.</li><li><span style=\"font-style: italic;\">Aspect ratio:</span> The ratio of width to height, for example: 5:4, 4:3, 5:3, 8:5, 16:9, 16:10.</li><li><span style=\"font-style: italic;\">Visible diagonal:</span> the size of the panel itself, measured diagonally. The display area also depends on the format: a monitor with a 4:3 format has a larger area than one with a 16:9 format with the same diagonal.</li><li><span style=\"font-style: italic;\">Contrast:</span> the ratio of the brightness of the lightest and darkest points.
Some monitors use an adaptive backlight level; the contrast figure given for them does not apply to image contrast.</li><li><span style=\"font-style: italic;\">Brightness:</span> The amount of light emitted by the display, usually measured in candelas per square meter.</li><li><span style=\"font-style: italic;\">Response time:</span> The minimum time a pixel needs to change its brightness. The measurement methods are ambiguous.</li><li><span style=\"font-style: italic;\">Viewing angle:</span> the angle at which contrast drops to a specified level; it is defined differently for different matrix types and by different manufacturers, and figures often cannot be compared.</li><li><span style=\"font-style: italic;\">Matrix type:</span> LCD technology.</li><li><span style=\"font-style: italic;\">Inputs:</span> (e.g. DVI, D-Sub, HDMI, etc.).</li></ul>\r\n<span style=\"font-weight: bold;\">What are the technologies for LCD monitors?</span>\r\nLCD monitors were developed in 1963 at the David Sarnoff Research Center at RCA, Princeton, New Jersey.\r\nThe main technologies in the manufacture of LCD displays are TN+film, IPS and MVA. These technologies differ in the geometry of the surfaces, the polymer, the control plate, and the front electrode. Of great importance are the purity and the type of polymer with liquid crystal properties used in specific developments.\r\nThe response time of LCD monitors designed using SXRD technology (Silicon X-tal Reflective Display - a silicon reflective liquid crystal matrix) is reduced to 5 ms. Sony, Sharp, and Philips have jointly developed PALC technology (Plasma Addressed Liquid Crystal - plasma control of liquid crystals), which combines the advantages of LCD (brightness and color richness, contrast) and plasma panels (large horizontal (H) and vertical (V) viewing angles, high refresh rate). These displays use gas-discharge plasma cells as a brightness controller, and an LCD matrix is used for color filtering.
PALC technology allows you to address each pixel of the display individually, which means unsurpassed controllability and image quality.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Monitor.png","alias":"monitor"},"475":{"id":475,"title":"Network Management - Hardware","description":" Your business is much more than just a machine that dispenses products or services in exchange for money. It’s akin to a living and breathing thing. Just as with the human body, in business, all the parts are interconnected and work together to move things forward.\r\nIf a company’s management is the brain, then its employees are the muscles. Muscles don’t work without the oxygen carried to them by the blood. Blood doesn’t pump through the body without the heart and circulatory system.\r\nData moves through your network like blood through veins, delivering vital information to employees who need it to do their jobs. In a business sense, the digital network is the heart and circulatory system. Without a properly functioning network, the entire business collapses. That’s why keeping networks healthy is vitally important. Just as keeping the heart healthy is critical to living a healthy life, a healthy network is key to a thriving business. It starts with network management.\r\nNetwork management hardware supports a broad range of functions, including the activities, methods, procedures and tools used to administer, operate and reliably maintain computer network systems.\r\nStrictly speaking, network management does not include terminal equipment (PCs, workstations, printers, etc.). Rather, it concerns the reliability, efficiency and capacity/capabilities of data transfer channels.","materialsDescription":" <span style=\"font-weight: bold;\">What Is Network Management?</span>\r\nNetwork management refers to the processes, tools, and applications used to administer, operate and maintain network infrastructure.
Performance management and fault analysis also fall into the category of network management. To put it simply, network management is the process of keeping your network healthy, which keeps your business healthy.\r\n<span style=\"font-weight: bold;\">What Are the Components of Network Management?</span>\r\nThe definition of network management is often broad, as network management involves several different components. Here are some of the terms you’ll often hear when network management or network management software is talked about:\r\n<ul><li>Network administration</li><li>Network maintenance</li><li>Network operation</li><li>Network provisioning</li><li>Network security</li></ul>\r\n<span style=\"font-weight: bold;\">Why Is Network Management so Important When It Comes to Network Infrastructure?</span>\r\nThe whole point of network management is to keep the network infrastructure running smoothly and efficiently. Network management helps you:\r\n<ul><li><span style=\"font-style: italic;\">Avoid costly network disruptions.</span> Network downtime can be very costly. In fact, industry research shows the cost can be up to $5,600 per minute or more than $300K per hour. Network disruptions take more than just a financial toll. They also have a negative impact on customer relationships. Slow and unresponsive corporate networks make it harder for employees to serve customers. And customers who feel underserved could be quick to leave.</li><li><span style=\"font-style: italic;\">Improve IT productivity.</span> By monitoring every aspect of the network, an effective network management system does many jobs at once. This frees up IT staff to focus on other things.</li><li><span style=\"font-style: italic;\">Improve network security.</span> With a focus on network management, it’s easy to identify and respond to threats before they propagate and impact end-users. 
Network management also aims to ensure regulatory and compliance requirements are met.</li><li><span style=\"font-style: italic;\">Gain a holistic view of network performance.</span> Network management gives you a complete view of how your network is performing. It enables you to identify issues and fix them quickly.</li></ul>\r\n<span style=\"font-weight: bold;\">What Are the Challenges of Maintaining Effective Network Management and Network Infrastructure?</span>\r\nNetwork infrastructures can be complex. Because of that complexity, maintaining effective network management is difficult. Advances in technology and the cloud have increased user expectations for faster network speeds and network availability. On top of that, security threats are becoming ever more advanced, varied and numerous. And if you have a large network, it incorporates several devices, systems, and tools that all need to work together seamlessly. As your network scales and your company grows, new potential points of failure are introduced. Increased costs also come into play.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Network_Management_Hardware__1_.png","alias":"network-management-hardware"},"477":{"id":477,"title":"Cabinet","description":" An electrical enclosure is a cabinet for electrical or electronic equipment to mount switches, knobs and displays and to prevent electrical shock to equipment users and protect the contents from the environment. The enclosure is the only part of the equipment which is seen by users. It may be designed not only for its utilitarian requirements, but also to be pleasing to the eye. Regulations may dictate the features and performance of enclosures for electrical equipment in hazardous areas, such as petrochemical plants or coal mines. 
Electronic packaging may place many demands on an enclosure for heat dissipation, radio frequency interference and electrostatic discharge protection, as well as functional, esthetic and commercial constraints.\r\nIn the United States, the National Electrical Manufacturers Association (NEMA) publishes NEMA enclosure type standards for the performance of various classes of electrical enclosures. The NEMA standards cover corrosion resistance, ability to protect from rain and submersion, etc.\r\nFor IEC member countries, standard IEC 60529 classifies the ingress protection rating (IP Codes) of enclosures.\r\nElectrical enclosures are usually made from rigid plastics, or metals such as steel, stainless steel, or aluminum. Steel cabinets may be painted or galvanized. Mass-produced equipment will generally have a customized enclosure, but standardized enclosures are made for custom-built or small production runs of equipment. For plastic enclosures ABS is used for indoor applications not in harsh environments. Polycarbonate, glass-reinforced, and fiberglass boxes are used where stronger cabinets are required, and may additionally have a gasket to exclude dust and moisture.\r\nMetal cabinets may meet the conductivity requirements for electrical safety bonding and shielding of enclosed equipment from electromagnetic interference. Non-metallic enclosures may require additional installation steps to ensure metallic conduit systems are properly bonded.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Stainless steel and carbon steel</span></span>\r\nCarbon steel and stainless steel are both used for enclosure construction due to their high durability and corrosion resistance. These materials are also moisture resistant and chemical resistant. 
They are the strongest of the construction options.\r\nStainless steel enclosures are suited for medical, pharma, and food industry applications since they are resistant to bacteria and fungi due to their non-porous quality. Stainless steel enclosures may be specified to permit wash-down cleaning in, for example, food manufacturing areas.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Aluminum</span></span>\r\nAluminum is chosen because of its light weight, relative strength, low cost, and corrosion resistance. It performs well in harsh environments and it is sturdy, capable of withstanding high impact while remaining malleable. Aluminum also acts as a shield against electromagnetic interference.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Polycarbonate</span></span>\r\nPolycarbonate used for electrical enclosures is strong but light, non-conductive and non-magnetic. It is also resistant to corrosion and some acidic environments; however, it is sensitive to abrasive cleaners. Polycarbonate is the easiest material to modify.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Fiberglass</span></span>\r\nFiberglass enclosures resist chemicals in corrosive applications. The material can be used over all indoor and outdoor temperature ranges. Fiberglass can be installed in environments that are constantly wet.","materialsDescription":" <span style=\"font-weight: bold; \">What is a 19-inch Rack Cabinet?</span>\r\nA 19-inch rack cabinet is a standardized size frame or enclosure for mounting equipment. Each piece of equipment has a front panel that is 19 inches wide. 
To determine if your rack cabinet is a 19-inch rack, measure the hole to hole spacing and it will measure 18.31 inches.\r\n<span style=\"font-weight: bold; \">What are the types of rack cabinets available?</span>\r\nThe most common types are:\r\n<ul><li>Rack cabinets or Server Racks</li><li>Open Frame racks - 4-post racks</li><li>Relay racks - 2-post racks</li><li>Portable rack cabinets</li><li>Wall Mount enclosures</li></ul>\r\n<span style=\"font-weight: bold; \">What is a Rack Unit (U or RU)?</span>\r\nThe Rack Unit is a unit of measurement used for defining the vertical space available in an equipment rack cabinet. A 'U' equals 1.75 inches or 4.45 cm. Rack-mountable equipment is usually designed to occupy an integer number of U. This dimension has been standardized by the Electronic Industries Alliance (EIA).\r\n<span style=\"font-weight: bold; \">What are some common or standard heights for rack cabinets?</span>\r\nServer racks come in a wide variety of heights anywhere from 1U to 50U and above.\r\n<span style=\"font-weight: bold; \">When are Four-post racks used?</span>\r\nFour-post racks allow for mounting rails to support equipment at the front and rear. These racks may be open in construction or enclosed by front and/or rear doors, side panels, or tops. Four-post racks can provide both robust support and security.\r\n<span style=\"font-weight: bold; \">When are Two-post racks used?</span>\r\nTwo-post racks provide just two vertical posts. Equipment can be mounted either via its front panel holes, or close to its center of gravity, depending on the design of the rack. Two-post racks are most often used for telecommunication installations.\r\n<span style=\"font-weight: bold; \">What are the Applicable Standards for rack cabinets/enclosures design and manufacturing?</span>\r\n<ul><li>The EIA-310. It standardizes features like the Rack Unit, vertical & horizontal hole spacing, rack cabinet openings and front panel width.</li><li>IEC Standards. 
IEC 60297 (IEC 60297-3-100, -101, -102, -104, -105 and IEC 60297-5) standardize the dimensions and the mechanical structure of the 19 inch rack cabinets.</li></ul>\r\n<span style=\"font-weight: bold;\">What is the significance of the Rack Cabinet Depth?</span>\r\nRack cabinet depth is important not only because it has to allow room for the depth of the particular equipment to be rack-mounted (deep servers vs. routers or switches), but also it has to allow sufficient room for cables as well as airflow indispensable in cooling rack cabinets and enclosures.\r\n<span style=\"font-weight: bold;\">What rack cabinet options for front and rear doors are available?</span>\r\nFront as well as Rear doors are available in many different materials, sizes and with various ventilation options. Locking systems are also available on most doors. Choosing a ventilated front and rear door is key in air circulation by creating a front to back flow pattern within the rack cabinet.\r\n<span style=\"font-weight: bold;\">What Side Panel options are available for rack cabinets and enclosures?</span>\r\nThe typical types of rack cabinet side panels are: solid removable, solid fixed and louvered removable and fixed.\r\n<span style=\"font-weight: bold;\">What is the most economical rack cabinet cooling technique?</span>\r\nOur approach to efficient cooling of datacenter rack cabinets begins with a sealed separation between the cold aisle in front of the equipment row and the hot aisle in the back of the equipment. The sealing between hot and cold is done at the front rail level of the enclosure. We implemented this solution, for instance, for a wide variety of Cisco routers: 2821, 7613, 3560E, 3845, 2911, 6509E, 4507R, 4510R, 4510E, 4900M.\r\nSince the router is designed to intake cold air from the side, which is now left by our separation in the hot aisle, we needed to create a duct that would connect the router's intake to the cold aisle in front of the cabinet. 
We accomplished the whole task by designing a cold air intake plenum mounted under the router, and connecting this plenum through a duct to the intake on the side of the router. ","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Cabinet.png","alias":"cabinet"},"501":{"id":501,"title":"All-flash and Hybrid Storage","description":" Costs have come down making hybrid and all-flash enterprise storage solutions the preferred choice for storing, processing and moving the massive volumes of business data generated in today’s cloud, mobile and IoT environment.\r\nAll-flash storage arrays utilize solid-state drives (SSDs) to deliver high-performance and low-latency workloads using data compression and deduplication technologies. Hybrid Storage combines those same solid-state drives (SSDs) with SAS or NL-SAS drives to offer a more cost-effective storage solution that balances cost with superior performance and high storage density.\r\nBoth options lower the complexity of providing scale-out performance at ultralow latency for data-intensive loads and big data analytics.\r\nWhether you are building a new storage array or refreshing your existing storage infrastructure, we will work with you to plan, source, install and configure a storage solution to meet your budgetary and business requirements.","materialsDescription":" <span style=\"font-weight: bold;\">What is flash storage and what is it used for?</span>\r\nFlash storage is any storage repository that uses flash memory. Flash memory comes in many form factors, and you probably use flash storage every day. 
From a single Flash chip on a simple circuit board attached to your computing device via USB to circuit boards in your phone or MP3 player, to a fully integrated “Enterprise Flash Disk” where lots of chips are attached to a circuit board in a form factor that can be used in place of a spinning disk.\r\n<span style=\"font-weight: bold;\">What is flash storage SSD?</span>\r\nA “Solid State Disk” or EFD “Enterprise Flash Disk” is a fully integrated circuit board where many Flash chips are engineered to represent a single Flash disk. Primarily used to replace a traditional spinning disk, SSDs are used in MP3 players, laptops, servers and enterprise storage systems.\r\n<span style=\"font-weight: bold;\">What is the difference between flash storage and SSD?</span>\r\nFlash storage is a reference to any device that can function as a storage repository. Flash storage can be a simple USB device or a fully integrated All-Flash Storage Array. SSD, “Solid State Disk” is an integrated device designed to replace spinning media, commonly used in enterprise storage arrays.\r\n<span style=\"font-weight: bold;\">What is the difference between flash storage and traditional hard drives?</span>\r\nA traditional hard drive leverages rotating platters and heads to read data from a magnetic device, comparable to a traditional record player, while flash storage leverages electronic media, or flash memory, to vastly improve performance. Flash eliminates rotational delay and seek time, functions that add latency to traditional storage media.\r\n<span style=\"font-weight: bold;\">What is the difference between an all-flash array and a hybrid array?</span>\r\nA Hybrid Storage Array uses a combination of spinning disk drives and Flash SSD. Along with the right software, a Hybrid Array can be configured to improve overall performance while reducing cost. 
An All-Flash-Array is designed to support only SSD media.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Al_flash_and_Hybrid_Storage.png","alias":"all-flash-and-hybrid-storage"},"503":{"id":503,"title":"Storage Networking","description":" A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to enhance accessibility of storage devices, such as disk arrays and tape libraries, to servers so that the devices appear to the operating system as locally-attached devices. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN) by other devices, thereby preventing interference of LAN traffic in data transfer.\r\nThe cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.\r\nA SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems.\r\nStorage area networks (SANs) are sometimes referred to as the network behind the servers and historically developed out of the centralised data storage model, but with its own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data, and the monitoring of the storage as well as the backup process. A SAN is a combination of hardware and software. It grew out of data-centric mainframe architectures, where clients in a network can connect to several servers that store different types of data. To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture storage devices can be added to increase storage capacity. 
However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN network bandwidth is used for accessing, storing and backing up data. To solve the single point of failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device.\r\nDAS was the first network storage system and is still widely implemented where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available in a LAN. Therefore, the transfer of data, particularly for backup, still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck. Therefore, SANs were developed, where a dedicated storage network was attached to the LAN, and terabytes of data are transferred over a dedicated high speed and bandwidth network. Within the storage network, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent. While in a NAS architecture data is transferred using the TCP and IP protocols over Ethernet, distinct protocols were developed for SANs, such as Fibre Channel, iSCSI, Infiniband. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures.","materialsDescription":"<span style=\"font-weight: bold; \">What is storage virtualization?</span>\r\nA storage area network (SAN) is a dedicated high-speed network or subnetwork that interconnects and presents shared pools of storage devices to multiple servers.\r\nA SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. 
This enables each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.\r\nA storage area network is typically assembled using three principal components: cabling, host bus adapters (HBAs), and switches attached to storage arrays and servers. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities. IT administrators manage storage area networks centrally.\r\nStorage arrays were initially all hard disk drive systems, but are increasingly populated with flash solid-state drives (SSDs).\r\n<span style=\"font-weight: bold; \">What are storage area networks used for?</span>\r\nFibre Channel (FC) SANs have the reputation of being expensive, complex and difficult to manage. Ethernet-based iSCSI has reduced these challenges by encapsulating SCSI commands into IP packets that don't require an FC connection.\r\nThe emergence of iSCSI means that instead of learning, building and managing two networks -- an Ethernet local area network (LAN) for user communication and an FC SAN for storage -- an organization can use its existing knowledge and infrastructure for both LANs and SANs. This is an especially useful approach in small and midsize businesses that may not have the funds or expertise to support a Fibre Channel SAN.\r\nOrganizations use SANs for distributed applications that need fast local network performance. SANs improve the availability of applications through multiple data paths. They can also improve application performance because they enable IT administrators to offload storage functions and segregate networks.\r\nAdditionally, SANs help increase the effectiveness and use of storage because they enable administrators to consolidate resources and deliver tiered storage. 
SANs also improve data protection and security. Finally, SANs can span multiple sites, which helps companies with their business continuity strategies.\r\n<span style=\"font-weight: bold;\">Types of network protocols</span>\r\nMost storage networks use the SCSI protocol for communication between servers and disk drive devices. A mapping layer to other protocols is used to form a network:\r\n<ul><li>ATA over Ethernet (AoE), mapping of ATA over Ethernet</li><li>Fibre Channel Protocol (FCP), the most prominent one, is a mapping of SCSI over Fibre Channel</li><li>Fibre Channel over Ethernet (FCoE)</li><li>ESCON over Fibre Channel (FICON), used by mainframe computers</li><li>HyperSCSI, mapping of SCSI over Ethernet</li><li>iFCP or SANoIP mapping of FCP over IP</li><li>iSCSI, mapping of SCSI over TCP/IP</li><li>iSCSI Extensions for RDMA (iSER), mapping of iSCSI over InfiniBand</li><li>Network block device, mapping device node requests on UNIX-like systems over stream sockets like TCP/IP</li><li>SCSI RDMA Protocol (SRP), another SCSI implementation for RDMA transports</li></ul>\r\nStorage networks may also be built using SAS and SATA technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from IDE direct-attached storage. SAS and SATA devices can be networked using SAS Expanders.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Storage_Networking.png","alias":"storage-networking"},"505":{"id":505,"title":"Entry Level Storage","description":" Ready-made entry-level storage systems are often used in various solutions for the SMB segment: disk space consolidation, virtualization, various cluster solutions requiring shared block access.\r\nKey architecture features of most entry-level storage systems on the market:\r\n<ul><li>Use one or two hot-swap controllers that use disk sharing. 
A controller here is a specialized server in a special form factor that handles storage operation (working with disks, servicing arrays and providing volumes to hosts, etc.).</li><li>The presence of two controllers increases the overall reliability of storage (the ability to avoid downtime during a planned shutdown or failure of one of the controllers) and allows additional scaling of performance when distributing volumes across different controllers. When using the write cache, its integrity is protected: power protection (a regular battery or supercapacitors plus a dump to flash memory) and duplication of content between controllers.</li><li>The choice of host interfaces: 16 and 8 Gb Fibre Channel, 1 and 10 Gb Ethernet (iSCSI, some models may have FCoE support), SAS. For some models, there are combined options, for example, FC + SAS.</li><li>It is possible to connect additional disk shelves (simple cases with SAS expanders) through the SAS interface. To increase the reliability of the connection, a 2-way connection can be applied (below is an example of one of the possible connection schemes).</li></ul>","materialsDescription":" <span style=\"font-weight: bold;\">What Is Entry-Level Storage?</span>\r\nEntry-level flash storage is simple, smart, secure, affordable, high-performance data storage for enterprises to start small and grow with seamless cloud connectivity as business requirements increase.\r\nOrganizations large and small are navigating at a rapid pace of change in a data-driven economy. Delivering data simply, quickly, and cost-effectively is essential to driving business growth, and the hybrid cloud has emerged as the most efficient way to meet changing business needs. Every IT organization is trying to determine how to modernize with hybrid cloud, and all-flash storage systems are critical on-premises to speed up enterprise applications. 
However, small enterprises have continued to use hard disk storage systems because of the high cost of all-flash solutions.\r\nAn entry-level storage system offers compact, dense, cost-effective, and easy-to-use storage. These storage systems can be deployed in small offices, small enterprises, and remote locations to run both file and block workloads effectively and efficiently. A simple storage system should support multiple protocols, including FC, NFS, SMB/CIFS, iSCSI, and FCoE, to help customers consolidate multiple applications onto a single simple system. It must be easy to install and deploy, secure and provide flexibility to connect to the cloud.\r\nEntry-level flash storage systems help accelerate all applications, consolidate workloads with better user experience, more effective storage and offer the best value to the customer.\r\n<span style=\"font-weight: bold;\">What Are the Benefits of Entry-Level Storage?</span>\r\n The benefits of entry-level storage include:\r\n<ul><li>Improved user experience with fast, secure, and continuous access to data;</li><li>Improved storage efficiency;</li><li>Reduced cost through improved TCO;</li><li>Increased ability for IT to support new business opportunities by leveraging the latest technologies like artificial intelligence (AI), machine learning (ML), deep learning (DL), and cloud.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Entry_Level_Storage.png","alias":"entry-level-storage"},"507":{"id":507,"title":"Mission Critical Storage","description":" As enterprises become more digital, the role of mission-critical applications, on which the functioning of the business depends, grows. 
In practice, this requires more platform flexibility to serve both traditional applications and modern cloud computing.\r\nIT professionals who are already fully loaded with support for traditional corporate tools, such as virtualization or database management systems, have to implement and maintain modern applications such as containers or analytics.\r\nServer virtualization has almost become the main driver for the development of storage virtualization, especially since virtual machines have already penetrated quite deeply into the critical applications segment.\r\nData storage systems help to cope with the ever-growing volumes of data, allowing you to work effectively with information. Storage systems for mission-critical applications are focused on the needs of companies of various sizes - from remote branches to large enterprises with significant amounts of information.\r\nMany factors also affect the selection of a data center location, but utility infrastructure, uptime, talent, and speed are always the focal points.\r\nFew people are unaware of the large electric loads (usage) of data centers. Naturally, due to the amount of power they need, data centers are very price-sensitive to a location’s cost of electricity. The cost is more than cents per kWh, though. Data centers have unique ramp-up needs and reserved capacity demands. The utility’s ability to accommodate these requirements can have a significant impact on cost. Likewise, the mission-critical aspect of the data center, requiring it to be online at all times, drives rigorous power redundancy and reliability requirements. 
The utility’s “cost-to-serve” and revenue credit policies must be factored into the overall cost of providing the requisite power.","materialsDescription":" <span style=\"font-weight: bold;\">What is mission-critical data?</span>\r\nA 'mission-critical' operation, system or facility may sound fairly straightforward – something that is essential to the overall operations of a business or process within a business. Essentially, something that is critical to the mission.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Mission_Critical_Storage.png","alias":"mission-critical-storage"},"509":{"id":509,"title":"Converged and Hyper Converged System","description":" Converged and hyper-convergent infrastructures simplify support for virtual desktop infrastructure and desktop virtualization, as they are designed to be easy to install and perform complex tasks.\r\nConvergent infrastructure combines the four main components of a data center in one package: computing devices, storage devices, network devices, and server virtualization tools. Hyper-converged infrastructure allows for tighter integration of a larger number of components using software tools.\r\nIn both convergent and hyper-convergent infrastructure, all elements are compatible with each other. Thanks to this, you will be able to purchase the necessary storage devices and network devices for your company at one time, and they, as you know, are of great importance in the virtual desktops infrastructure. This allows you to simplify the process of deploying such an infrastructure - something that many companies that need to virtualize their desktop systems have been waiting for and will welcome.\r\nDespite its value and innovation, there are several questions about these technologies regarding their intended use and differences. 
Let's try to figure out what functionality converged and hyper-convergent infrastructures offer and how they differ.","materialsDescription":" <span style=\"font-weight: bold;\">What is converged infrastructure?</span>\r\nConvergent infrastructure combines computing devices, storage, network devices and server virtualization tools in one chassis so that they can be managed from one place. Management capabilities may include the management of virtual desktop infrastructure, depending on the selected configuration and manufacturer.\r\nThe hardware included in the bundled converged infrastructure is pre-configured to support various targets: virtual desktop infrastructures, databases, special applications, and so on. But in fact, you have little freedom to change the selected configuration.\r\nRegardless of the method chosen for extending the virtual desktop infrastructure environment, you should understand that subsequent vertical scaling will be costly and time-consuming. Adding individual components becomes complex and deprives you of many of the benefits of a converged infrastructure. Adding workstations and expanding storage capacity in a corporate infrastructure can be just as expensive, which suggests the need for proper planning for any virtual desktop infrastructure deployment.\r\nOn the other hand, all components of a converged infrastructure can work for a long time. For example, a complete server of such infrastructure works well even without the rest of the infrastructure components.\r\n<span style=\"font-weight: bold;\">What is a hyper-convergent infrastructure?</span>\r\nThe hyper-converged infrastructure was built on the basis of converged infrastructure and the concept of a software-defined data center. It combines all the components of the usual data center in one system. 
All four key components of the converged infrastructure are in place, but sometimes it also includes additional components, such as backup software, snapshot capabilities, data deduplication functionality, intermediate compression, global network optimization (WAN), and much more. Convergent infrastructure relies primarily on hardware, and a software-defined data center often adapts to any hardware. In the hyper-convergent infrastructure, these two possibilities are combined.\r\nHyper-converged infrastructure is supported by one supplier. It can be managed as a single system with a single set of tools. To expand the infrastructure, you just need to install blocks of necessary devices and resources (for example, storage) into the main system block. And this is done literally on the fly.\r\nSince the hyper-convergent infrastructure is software-defined (that is, the operation of the infrastructure is logically separated from the physical equipment), the mutual integration of components is denser than in a conventional converged infrastructure, and the components themselves must be nearby to work correctly. This makes it possible to use a hyper-convergent infrastructure to support even more workloads than in the case of conventional converged infrastructure. This is because the way resources are defined and configured can be changed at the software level. In addition, you can make it work with specialized applications and workloads, which pre-configured converged infrastructures do not allow.\r\nHyper-converged infrastructure is especially valuable for working with a virtual desktop infrastructure because it allows you to scale up quickly without additional costs. 
Often, in the case of the classic virtual desktops infrastructure, things are completely different - companies need to buy more resources before scaling or wait for virtual desktops to use the allocated space and network resources, and then, in fact, add new infrastructure.\r\nBoth scenarios require significant time and money. But, in the case of hyper-convergent infrastructure, if you need to expand the storage, you can simply install the required devices in the existing stack. Scaling can be done quickly, in the time it takes to deliver the equipment. In this case, you do not have to go through the full procedure of re-evaluation and reconfiguration of the corporate infrastructure.\r\nIn addition, when moving from physical PCs to virtual workstations, you will need devices to perform all the computational tasks that laptops and PCs typically perform. Hyper-converged infrastructure will greatly help with this, as it often comes bundled with a large amount of flash memory, which has a positive effect on the performance of virtual desktops. This increases the speed of I/O operations, smoothes work under high loads, and allows you to perform scanning for viruses and other types of monitoring in the background (without distracting users).\r\nThe flexibility of the hyper-converged infrastructure makes it more scalable and cost-effective compared to the convergent infrastructure since it has the ability to add computing and storage devices as needed. The cost of the initial investment for both infrastructures is high, but in the long term, the value of the investment should pay off.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Converged_and_Hyper_Converged_System.png","alias":"converged-and-hyper-converged-system"},"515":{"id":515,"title":"Tower Server","description":"A tower server is a computer that is built in an upright cabinet that stands alone and that is designed to function as a server. 
The cabinet is known as a tower, and multiple tower servers can work simultaneously for different tasks and processes. Tower servers are popular owing to their scalability and reliability, since a virtually unlimited number of servers can be added to an existing network, largely because of the independent nature of the individual tower servers.\r\nTower servers support most basic applications such as system management, file management, print collaboration, ER applications, distribution and system security.\r\nThere are certain advantages to using tower servers. A tower server is robust and simple in nature. Because overall component density is low, easier cooling is possible in tower servers, which helps prevent damage, overheating or downtime. The scalability factor is high in tower servers, and it is much easier to add servers to a simple network, leading to adaptable integration. Maintenance requirements are also lower compared to other designs. Easy identification, both on the network and physically, is possible with tower servers, as the data are usually stored in a single tower and not across various devices.\r\nThe cabling involved in tower servers can be complicated, and several tower servers in a single location can be noisy because each tower might need a dedicated fan. An individual monitor, mouse or keyboard is required for each tower server, or a keyboard, video and mouse (KVM) switch needs to be available for managing the devices with a single set of equipment. In comparison to blade servers or rack servers, tower servers can also be bulkier.","materialsDescription":" <span style=\"font-weight: bold;\">What is a tower server?</span>\r\nA tower server is a computer that is built in an upright cabinet that stands alone and that is designed to function as a server. The cabinet is known as a tower, and multiple tower servers can work simultaneously for different tasks and processes.
Tower servers are popular owing to their scalability and reliability, since a virtually unlimited number of servers can be added to an existing network, largely because of the independent nature of the individual tower servers.\r\n<span style=\"font-weight: bold;\">What are the advantages of a tower server?</span>\r\n<ul><li>Easier cooling, because the overall component density is fairly low.</li></ul>\r\nA tower server is robust and simple in nature. As overall component density is low, easier cooling is possible in tower servers, which helps prevent damage, overheating or downtime.\r\n<ul><li>Scalability: an unlimited number of servers can be added to an existing network.</li></ul>\r\nThe scalability factor is high in tower servers, and it is much easier to add servers to a simple network, leading to adaptable integration. Easy identification, both on the network and physically, is possible with tower servers, as the data are usually stored in a single tower and not across various devices.\r\n<span style=\"font-weight: bold;\">What are the disadvantages of a tower server?</span>\r\n<ul><li>A set of tower servers is bulkier and heavier than an equivalent blade server or set of rack servers.</li><li>A group of several air-cooled tower servers in a single location can be noisy because each tower requires a dedicated fan.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Tower_Server.png","alias":"tower-server"},"517":{"id":517,"title":"Blade System","description":" A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space and minimize power consumption, among other considerations, while still having all the functional components to be considered a computer.
Unlike a rack-mount server, a blade server needs a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.\r\nIn a standard server-rack configuration, one rack unit or 1U—19 inches (480 mm) wide and 1.75 inches (44 mm) tall—defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form-factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. As of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems.\r\nThe enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them among the blade computers, the overall utilization becomes higher. The specifics of which services are provided vary by vendor.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Power.</span></span> Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs).
To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers may have redundant power supplies, again adding to the bulk and heat output of the design.\r\nThe blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures. This setup reduces the number of PSUs required to provide a resilient power supply.\r\nThe popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS).\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Cooling.</span></span> During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.\r\nA frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade-enclosures feature variable-speed fans and control logic, or even liquid cooling systems that adjust to meet the system's cooling requirements.\r\nAt the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling with racks populated at over 50% full. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers. 
This is because one can fit up to 128 blade servers in the same rack that will only hold 42 1U rack-mount servers.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Networking.</span></span> Blade servers generally include integrated or optional network interface controllers for Ethernet or host adapters for Fibre Channel storage systems, or converged network adapters to combine storage and data via one Fibre Channel over Ethernet interface. In many blades at least one interface is embedded on the motherboard and extra interfaces can be added using mezzanine cards.\r\nA blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades.\r\nBlade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor.\r\nEventual standardization of the technology might result in more choices for consumers; as of 2009, increasing numbers of third-party software vendors have started to enter this growing field.\r\nBlade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling, and power-supply technology.
Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, can suffer even more acutely from the heating, ventilation, and air conditioning problems that affect large conventional server farms.","materialsDescription":" <span style=\"font-weight: bold;\">What is blade server?</span>\r\nA blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is a server in its own right, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fiber Channel host bus adaptor (HBA) and other input/output (IO) ports.\r\nBlade servers allow more processing power in less rack space, simplifying cabling and reducing power consumption. According to a SearchWinSystems.com article on server technology, enterprises moving to blade servers can experience as much as an 85% reduction in cabling for blade installations over conventional 1U or tower servers. With so much less cabling, IT administrators can spend less time managing the infrastructure and more time ensuring high availability.\r\nEach blade typically comes with one or two local ATA or SCSI drives. For additional storage, blade servers can connect to a storage pool facilitated by a network-attached storage (NAS), Fiber Channel, or iSCSI storage-area network (SAN). 
The advantage of blade servers comes not only from the consolidation benefits of housing several servers in a single chassis, but also from the consolidation of associated resources (like storage and networking equipment) into a smaller architecture that can be managed through a single interface.\r\nA blade server is sometimes referred to as a high-density server and is typically used in a clustering of servers that are dedicated to a single task, such as:\r\n<ul><li>File sharing</li><li>Web page serving and caching</li><li>SSL encrypting of Web communication</li><li>The transcoding of Web page content for smaller displays</li><li>Streaming audio and video content</li></ul>\r\nLike most clustering applications, blade servers can also be managed to include load balancing and failover capabilities.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Blade_System.png","alias":"blade-system"},"519":{"id":519,"title":"Density Optimized Server","description":" The high-density server system is a modern concept for building an economical and scalable computing subsystem within the data processing center (hereinafter, the data center).\r\nThe high-density server system includes server equipment, network interconnect modules and resource virtualization technologies, and is designed so that all the components of a modern data center can be installed within a single structural unit (chassis).\r\nThe virtualization tools and the adaptive management system combine the high-density server system's resources into a shared pool for processing various combinations of workloads.\r\nWithin an information system infrastructure, the high-density server system achieves significant cost savings through denser packing of components and fewer cable connections, joint management of systems, the use of virtualization tools, reduced power and cooling costs, simplified deployment and the rapid interchangeability
of server equipment.\r\nThanks to its design features and the technologies it applies, the high-density server system can be used as a subsystem of corporate data centers, or act as the computing center for the information system of a small company.","materialsDescription":" <span style=\"font-weight: bold;\">The High-Density Server System Structure</span>\r\nThe high-density server system comprises:\r\n<ul><li>server equipment;</li><li>interconnect modules;</li><li>software;</li><li>a management subsystem for the high-density server system.</li></ul>\r\nStructurally, the high-density server system is designed to hold servers of a special form factor, called "blades". At the level of the system and application software, a “blade” does not differ from a typical server installed in a standard mounting rack.\r\nThe high-density server system includes a universal chassis with redundant input-output, power, cooling and control systems, as well as blade servers and storage devices of a matching form factor. Use of the high-density server system implies the provision of a functional management subsystem and services for installation, launch and maintenance.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Density_Optimized_Server.png","alias":"density-optimized-server"},"521":{"id":521,"title":"Mission Critical Server","description":" Mission-critical refers to any factor of a system (equipment, process, procedure, software, etc.) whose failure will result in the failure of business operations; it is critical to the organization's "mission".\r\nA mission-critical server is a system whose failure may result in the failure of some goal-directed activity. An example of a mission-critical system is a navigational system. The difference between mission-critical and business-critical lies in the scope of the impact of a failure.
A business-critical system fault affects only a single company or a group of companies and may only partially halt their activity.\r\nWith mission-critical servers you get best-in-class reliability and uptime, with outstanding performance for the workloads that run your enterprise.\r\nA mission-critical system is a system that is essential to the survival of a business or organization. When a mission-critical system fails or is interrupted, business operations are significantly impacted.\r\nA mission-critical system is also known as mission-essential equipment or a mission-critical application. ","materialsDescription":" <span style=\"font-weight: bold;\">What is a mission-critical server?</span>\r\nA mission-critical server is a system that is essential to the continuity of the operations of a business or organization. When a mission-critical server fails or is interrupted, business operations are significantly impacted.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Mission_Critical_Server.png","alias":"mission-critical-server"},"536":{"id":536,"title":"WAN optimization - appliance","description":" WAN optimization is a collection of techniques for increasing data-transfer efficiency across wide-area networks (WANs), and a WAN optimization appliance is a device that applies these techniques. In 2008, the WAN optimization market was estimated to be $1 billion and was expected to grow to $4.4 billion by 2014 according to Gartner, a technology research firm. In 2015 Gartner estimated the WAN optimization market to be a $1.1 billion market.\r\nThe most common measures of TCP data-transfer efficiency (i.e., optimization) are throughput, bandwidth requirements, latency, protocol optimization, and congestion, as manifested in dropped packets. In addition, the WAN itself can be classified with regard to the distance between endpoints and the amounts of data transferred. Two common business WAN topologies are Branch to Headquarters and Data Center to Data Center (DC2DC).
In general, "Branch" WAN links are closer, use less bandwidth, support more simultaneous connections, support smaller connections and more short-lived connections, and handle a greater variety of protocols. They are used for business applications such as email, content management systems, database application, and Web delivery. In comparison, "DC2DC" WAN links tend to require more bandwidth, are more distant and involve fewer connections, but those connections are bigger (100 Mbit/s to 1 Gbit/s flows) and of longer duration. Traffic on a "DC2DC" WAN may include replication, back up, data migration, virtualization, and other Business Continuity/Disaster Recovery (BC/DR) flow.\r\nWAN optimization has been the subject of extensive academic research almost since the advent of the WAN. In the early 2000s, research in both the private and public sectors turned to improve the end-to-end throughput of TCP, and the target of the first proprietary WAN optimization solutions was the Branch WAN. In recent years, however, the rapid growth of digital data, and the concomitant needs to store and protect it, has presented a need for DC2DC WAN optimization. For example, such optimizations can be performed to increase overall network capacity utilization, meet inter-datacenter transfer deadlines, or minimize average completion times of data transfers. As another example, private inter-datacenter WANs can benefit optimizations for fast and efficient geo-replication of data and content, such as newly computed machine learning models or multimedia content.\r\nComponent techniques of Branch WAN Optimization include deduplication, wide-area file services (WAFS), SMB proxy, HTTPS Proxy, media multicasting, web caching, and bandwidth management. Requirements for DC2DC WAN Optimization also center around deduplication and TCP acceleration, however, these must occur in the context of multi-gigabit data transfer rates. 
","materialsDescription":" <span style=\"font-weight: bold;\">What techniques does WAN optimization have?</span>\r\n<ul><li><span style=\"font-weight: bold;\">Deduplication</span> – Eliminates the transfer of redundant data across the WAN by sending references instead of the actual data. By working at the byte level, benefits are achieved across IP applications.</li><li><span style=\"font-weight: bold;\">Compression</span> – Relies on data patterns that can be represented more efficiently. Essentially compression techniques similar to ZIP, RAR, ARJ, etc. are applied on-the-fly to data passing through hardware (or virtual machine) based WAN acceleration appliances.</li><li><span style=\"font-weight: bold;\">Latency optimization</span> – Can include TCP refinements such as window-size scaling, selective acknowledgments, Layer 3 congestion control algorithms, and even co-location strategies in which the application is placed in near proximity to the endpoint to reduce latency. In some implementations, the local WAN optimizer will answer the requests of the client locally instead of forwarding the request to the remote server in order to leverage write-behind and read-ahead mechanisms to reduce WAN latency.</li><li><span style=\"font-weight: bold;\">Caching/proxy</span> – Staging data in local caches; Relies on human behavior, accessing the same data over and over.</li><li><span style=\"font-weight: bold;\">Forward error correction</span> – Mitigates packet loss by adding another loss-recovery packet for every “N” packets that are sent, and this would reduce the need for retransmissions in error-prone and congested WAN links.</li><li><span style=\"font-weight: bold;\">Protocol spoofing</span> – Bundles multiple requests from chatty applications into one. May also include stream-lining protocols such as CIFS.</li><li><span style=\"font-weight: bold;\">Traffic shaping</span> – Controls data flow for specific applications. 
This gives network operators/administrators the flexibility to decide which applications take precedence over the WAN. A common use case of traffic shaping would be to prevent one protocol or application from hogging or flooding a link over other protocols deemed more important by the business/administrator. Some WAN acceleration devices are able to traffic shape with granularity far beyond traditional network devices, such as shaping traffic on a per-user and per-application basis simultaneously.</li><li><span style=\"font-weight: bold;\">Equalizing</span> – Makes assumptions on what needs immediate priority based on data usage. Usage examples for equalizing may include wide-open, unregulated Internet connections and clogged VPN tunnels.</li><li><span style=\"font-weight: bold;\">Connection limits</span> – Prevents access gridlock in routers and access points due to denial-of-service or peer-to-peer traffic. Best suited for wide-open Internet access links; can also be used on WAN links.</li><li><span style=\"font-weight: bold;\">Simple rate limits</span> – Prevents one user from getting more than a fixed amount of data.
Best suited as a stop-gap first effort for remediating a congested Internet connection or WAN link.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_WAN_optimization_appliance.png","alias":"wan-optimization-appliance"},"542":{"id":542,"title":"UTM - Unified Threat Management Appliance","description":"A unified threat management (UTM) system is a type of network hardware appliance that protects businesses from security threats in a simplified way by combining and integrating multiple security services and features.<br />UTM devices are often packaged as network security appliances that can help protect networks against combined security threats, including malware and attacks that simultaneously target separate parts of the network.\r\nWhile UTM systems and next-generation firewalls (NGFWs) are sometimes comparable, UTM devices include added security features that NGFWs don't offer.\r\nUTM systems provide increased protection and visibility, as well as control over network security, which reduces complexity. UTM systems typically do this via inspection methods that address different types of threats.\r\nThese methods include:\r\n<ul><li>Flow-based inspection, also known as stream-based inspection, samples data that enters a UTM device, and then uses pattern matching to determine whether there is malicious content in the data flow.</li><li>Proxy-based inspection acts as a proxy to reconstruct the content entering a UTM device, and then executes a full inspection of the content to search for potential security threats. If the content is clean, the device sends the content to the user. 
However, if a virus or other security threat is detected, the device removes the questionable content, and then sends the file or webpage to the user.</li></ul>\r\nUTM devices provide a single platform for multiple network security functions and offer the benefit of a single interface for those security functions, as well as a single point of interface to monitor or analyze security logs for those different functions.<br /><br />","materialsDescription":"<span style=\"font-weight: bold;\">How do UTM Appliances block a computer virus — or many viruses?</span>\r\nUnified threat management appliances have gained traction in the industry due to the emergence of blended threats, which are combinations of different types of malware and attacks that target separate parts of the network simultaneously. Preventing these types of attacks can be difficult when using separate appliances and vendors for each specific security task, as each aspect has to be managed and updated individually in order to remain current in the face of the latest forms of malware and cybercrime. By creating a single point of defense and providing a single console, UTM solutions make dealing with varied threats much easier.\r\nWhile unified threat management solutions do solve some network security issues, they aren't without some drawbacks, with the biggest one being that the single point of defense that a UTM appliance provides also creates a single point of failure. Because of this, many organizations choose to supplement their UTM device with a second software-based perimeter to stop any malware that got through or around the UTM firewall.\r\n<span style=\"font-weight: bold;\">What kind of companies use a Unified Threat Management system?</span>\r\nUTM was originally intended for small and medium-sized businesses looking to simplify their security systems. But due to its almost universal applicability, it has since become popular with all sectors and larger enterprises.
Developments in the technology have allowed it to scale up, opening UTM up to more types of businesses that are looking for a comprehensive gateway security solution.\r\n<span style=\"font-weight: bold;\">What security features does Unified Threat Management have?</span>\r\nAs previously mentioned, most UTM services include a firewall, antivirus and intrusion detection and prevention systems. But they can also include other services that provide additional security.\r\n<ul><li>Data loss prevention software to stop data from exfiltrating the business, which in turn prevents a data leak from occurring.</li><li>Security information and event management software for real-time monitoring of network health, which allows threats and points of weakness to be identified.</li><li>Bandwidth management to regulate and prioritize network traffic, ensuring everything is running smoothly without getting overwhelmed.</li><li>Email filtering to remove spam and dangerous emails before they reach the internal network, lowering the chance of a phishing or similar attack breaching your defenses.</li><li>Web filtering to prevent connections to dangerous or inappropriate sites from a machine on the network. This lowers the chance of infection through malvertising or malicious code on the page. It can also be used to increase productivity within a business, e.g. by blocking or restricting social media, gaming sites, etc.</li><li>Application filtering to blacklist or whitelist which programs can run, preventing certain applications (e.g. Facebook Messenger) from communicating in and out of the network.</li></ul>\r\n<span style=\"font-weight: bold;\">What are the benefits of Unified Threat Management?</span>\r\n<ul><li><span style=\"font-weight: bold;\">Simplifies the network</span></li></ul>\r\nBy consolidating multiple security appliances and services into one, you can easily reduce the amount of time spent on maintaining many separate systems that may have become disorganized.
This can also improve the performance of the network as there is less bloat. A smaller system also requires less energy and space to run.\r\n<ul><li><span style=\"font-weight: bold;\">Provides greater security and visibility</span></li></ul>\r\nA UTM system can include reporting tools, application filtering and virtual private network (VPN) capabilities, all of which defend your network from more types of threats or improve the existing security. Additionally, monitoring and analysis tools can help locate points of weakness or identify ongoing attacks.\r\n<ul><li><span style=\"font-weight: bold;\">Can defend from more sophisticated attacks</span></li></ul>\r\nBecause UTM defends multiple parts of a network, an attack targeting multiple points simultaneously can be repelled more easily. With cyber-attacks getting more sophisticated, having defenses that can match them is of greater importance.\r\nHaving several ways of detecting a threat also means a UTM system is more accurate at identifying potential attacks and preventing them from causing damage.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_UTM_Unified_Threat_Management_Appliance.png","alias":"utm-unified-threat-management-appliance"},"544":{"id":544,"title":"DLP - Appliance","description":"DLP (Data Loss Prevention) is a technology for preventing the leakage of confidential information from an information system to the outside world, as well as the software and hardware tools used to prevent such leakage. According to most definitions, information leakage is the unauthorized distribution of restricted-access data that is not controlled by the owner of this data. This implies that the person who committed the leak has legitimate access rights to the information.\r\nThe most effective way to ensure data security on corporate computers today is to use specialized data leakage prevention tools (Data Leak Prevention or DLP).
DLP solutions are designed to eliminate the “human factor” and prevent misconduct by preventing (and recording) data leaks from a computer in as many scenarios as possible.\r\nEmail and webmail services, instant messaging services, social networks and forums, cloud file storage, FTP servers - all these benefits of the Internet can at any moment become a channel for leaking corporate information, the disclosure of which may be undesirable or even dangerous for the business.\r\nYou shouldn’t disregard traditional local channels either - data storage devices (flash drives, disks, memory cards), printers, data transfer interfaces and synchronization with smartphones.\r\nAn effective DLP solution should control the widest possible range of network communication channels, local devices, and interfaces. At the same time, the effectiveness of a DLP solution is determined by the flexibility of its settings and its ability to successfully balance business interests and security.\r\nToday, DLP products are a rapidly growing information security industry, and new products are released very often. Installing a DLP system allows you to distinguish confidential information from ordinary data, which in turn reduces the overall cost of protecting information and resources. Price is another important consideration when choosing a DLP system, but DLP solutions are modular, allowing you to protect the channels you need without paying extra for protecting unnecessary ones.","materialsDescription":"<span style=\"font-weight: bold;\">What Is Data Loss Prevention (DLP)?</span>\r\nData loss prevention, or DLP, is a set of technologies, products, and techniques that are designed to stop sensitive information from leaving an organization.\r\nData can end up in the wrong hands whether it’s sent through email or instant messaging, website forms, file transfers, or other means.
DLP strategies must include solutions that monitor for, detect, and block the unauthorized flow of information.\r\n<span style=\"font-weight: bold;\">How does DLP work?</span>\r\nDLP technologies use rules to look for sensitive information that may be included in electronic communications or to detect abnormal data transfers. The goal is to stop information such as intellectual property, financial data, and employee or customer details from being sent, either accidentally or intentionally, outside the corporate network.\r\n<span style=\"font-weight: bold;\">Why do organizations need DLP solutions?</span>\r\nThe proliferation of business communications has given many more people access to corporate data. Some of these users can be negligent or malicious. The result: a multitude of insider threats that can expose confidential data with a single click. Many government and industry regulations have made DLP a requirement.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_DLP_Appliance.png","alias":"dlp-appliance"},"546":{"id":546,"title":"WAF-web application firewall appliance","description":"A web application firewall is a special type of application firewall that applies specifically to web applications. It is deployed in front of web applications and analyzes bi-directional web-based (HTTP) traffic - detecting and blocking anything malicious. The OWASP provides a broad technical definition for a WAF as “a security solution on the web application level which - from a technical point of view - does not depend on the application itself.” According to the PCI DSS Information Supplement for requirement 6.6, a WAF is defined as “a security policy enforcement point positioned between a web application and the client endpoint. This functionality can be implemented in hardware, running in an appliance device, or in a typical server running a common operating system. 
It may be a stand-alone device or integrated into other network components.” In other words, a WAF can be a physical appliance that prevents vulnerabilities in web applications from being exploited by outside threats. These vulnerabilities may be because the application itself is a legacy type or it was insufficiently coded by design. The WAF addresses these code shortcomings by special configurations of rule sets, also known as policies.\r\nPreviously unknown vulnerabilities can be discovered through penetration testing or via a vulnerability scanner. A web application vulnerability scanner, also known as a web application security scanner, is defined in the SAMATE NIST 500-269 as “an automated program that examines web applications for potential security vulnerabilities. In addition to searching for web application-specific vulnerabilities, the tools also look for software coding errors.” Resolving vulnerabilities is commonly referred to as remediation. Corrections to the code can be made in the application but typically a more prompt response is necessary. In these situations, the application of a custom policy for a unique web application vulnerability to provide a temporary but immediate fix (known as a virtual patch) may be necessary.\r\nWAFs are not an ultimate security solution, rather they are meant to be used in conjunction with other network perimeter security solutions such as network firewalls and intrusion prevention systems to provide a holistic defense strategy.\r\nWAFs typically follow a positive security model, a negative security model, or a combination of both as mentioned by the SANS Institute. WAFs use a combination of rule-based logic, parsing, and signatures to detect and prevent attacks such as cross-site scripting and SQL injection. The OWASP produces a list of the top ten web application security flaws. All commercial WAF offerings cover these ten flaws at a minimum. There are non-commercial options as well. 
As mentioned earlier, the well-known open source WAF engine called ModSecurity is one of these options. A WAF engine alone is insufficient to provide adequate protection, therefore OWASP along with Trustwave's SpiderLabs helps organize and maintain a Core Rule Set via GitHub to use with the ModSecurity WAF engine.","materialsDescription":"A Web Application Firewall or WAF provides security for online services from malicious Internet traffic. WAFs detect and filter out threats such as the OWASP Top 10, which could degrade, compromise or bring down online applications.\r\n<span style=\"font-weight: bold;\">What are Web Application Firewalls?</span>\r\nWeb application firewalls assist load balancing by examining HTTP traffic before it reaches the application server. They also protect against web application vulnerabilities and unauthorized transfer of data from the web server at a time when security breaches are on the rise. According to the Verizon Data Breach Investigations Report, web application attacks were the most prevalent breaches in 2017 and 2018.\r\nThe PCI Security Standards Council defines a web application firewall as “a security policy enforcement point positioned between a web application and the client endpoint. This functionality can be implemented in software or hardware, running in an appliance device, or in a typical server running a common operating system. It may be a stand-alone device or integrated into other network components.”\r\n<span style=\"font-weight: bold;\">How does a Web Application Firewall Work?</span>\r\nA web application firewall (WAF) intercepts and inspects all HTTP requests using a security model based on a set of customized policies to weed out bogus traffic. WAFs block bad traffic outright or can challenge a visitor with a CAPTCHA test that humans can pass but a malicious bot or computer program cannot.\r\nWAFs follow rules or policies customized to specific vulnerabilities; this is also how WAFs help mitigate application-layer DDoS attacks.
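A minimal sketch of the signature-based request inspection described above, in Python. The two patterns are simplified illustrations invented for this example; production rule sets such as the OWASP ModSecurity Core Rule Set are far more extensive and resistant to evasion.

```python
import re

# Simplified WAF-style negative-security signatures (illustrative only).
SIGNATURES = {
    "sqli": re.compile(r"(?i)\b(union\s+select|or\s+1\s*=\s*1|drop\s+table)\b"),
    "xss": re.compile(r"(?i)(<script\b|javascript:|onerror\s*=)"),
}

def inspect_request(params: dict) -> list:
    """Return the names of rules matched by any request parameter."""
    hits = []
    for rule, pattern in SIGNATURES.items():
        if any(pattern.search(v) for v in params.values()):
            hits.append(rule)
    return hits  # an empty list means the request passes this policy
```

A WAF enforcing such a policy would reject any request whose parameters trigger a rule, or escalate it to a CAPTCHA challenge.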
Creating the rules on a traditional WAF can be complex and require expert administration. The Open Web Application Security Project maintains a list of the OWASP top web application security flaws for WAF policies to address.\r\nWAFs come in the form of hardware appliances, server-side software, or as-a-service traffic filtering. WAFs can be considered reverse proxies, i.e. the opposite of a proxy server. Proxy servers protect devices from malicious applications, while WAFs protect web applications from malicious endpoints.\r\n<span style=\"font-weight: bold;\">What Are Some Web Application Firewall Benefits?</span>\r\nA web application firewall (WAF) prevents attacks that try to take advantage of the vulnerabilities in web-based applications. The vulnerabilities are common in legacy applications or applications with poor coding or designs. WAFs handle the code deficiencies with custom rules or policies.\r\nIntelligent WAFs provide real-time insights into application traffic, performance, security and threat landscape. This visibility gives administrators the flexibility to respond to the most sophisticated attacks on protected applications.\r\nWhen the Open Web Application Security Project identifies the OWASP top vulnerabilities, WAFs allow administrators to create custom security rules to combat the list of potential attack methods. An intelligent WAF analyzes the security rules matching a particular transaction and provides a real-time view as attack patterns evolve. Based on this intelligence, the WAF can reduce false positives.\r\n<span style=\"font-weight: bold;\">What Is the Difference Between a Firewall and a Web Application Firewall?</span>\r\nA traditional firewall protects the flow of information between servers while a web application firewall is able to filter traffic for a specific web application.
Network firewalls and web application firewalls are complementary and can work together.\r\nTraditional security methods include network firewalls, intrusion detection systems (IDS) and intrusion prevention systems (IPS). They are effective at blocking bad traffic at the perimeter, on the lower layers (L3-L4) of the Open Systems Interconnection (OSI) model. Traditional firewalls cannot detect attacks in web applications because they do not understand Hypertext Transfer Protocol (HTTP), which operates at layer 7 of the OSI model. They also only allow the port that sends and receives requested web pages from an HTTP server to be open or closed. This is why web application firewalls are effective for preventing attacks like SQL injections, session hijacking and Cross-Site Scripting (XSS).\r\n<span style=\"font-weight: bold;\">When Should You Use a Web Application Firewall?</span>\r\nAny business that uses a website to generate revenue should use a web application firewall to protect business data and services. Organizations that use online vendors should especially deploy web application firewalls because the security of outside groups cannot be controlled or trusted.\r\n<span style=\"font-weight: bold;\">How Do You Use a Web Application Firewall?</span>\r\nA web application firewall requires correct positioning, configuration, administration and monitoring. Web application firewall installation must include the following four steps: secure, monitor, test and improve. This should be a continuous process to ensure application-specific protection.<br />The configuration of the firewall should be determined by business rules and the guardrails set by the company’s security policy.
This approach ensures that the rules and filters in the web application firewall reflect the company’s actual policy.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_WAF_web_application_firewall_appliance.png","alias":"waf-web-application-firewall-appliance"},"550":{"id":550,"title":"Web filtering - Appliance","description":" <span style=\"font-weight: bold; \">A web filter appliance</span> is a device that allows the user to filter all online content for censorship purposes, such that any links, downloads, and email containing offensive materials or pornography are outright blocked or removed. A web filtering appliance can also help you prevent malware infection because, more often than not, malware is hidden within links that promise porn or controversial content. Moreover, because the number of online hazards keeps increasing every day, it's always prudent to get a web filter appliance that can adapt to the changing times and the ever-evolving hazards posed by the Internet.\r\nAt any rate, content filtering appliances have a distinct advantage over their software counterparts in terms of stable restriction features, unrestricted monitoring, no platform-based limitations, easy upgrades and improvements, and so on. That's because the best web filters are fully integrated software and hardware systems that optimize their hybrid attributes when it comes to content filtering by gaining full, unmitigated control over online usage through well-defined policies as mandated by the owner of the network or the IT security administrator.\r\nGetting a web content filtering appliance that offers premium-grade, detailed content analysis with predefined categories (including keywords for pornography, game downloads, drugs, violence, adult content, offensive content, racist content, controversial content, and the like) is a must for any major network.
All of the items you'll ever need to block should be easily selectable with a click of your mouse; after all, sophisticated technology aside, a good web filter appliance should also be intuitive and practical to use.<br /> ","materialsDescription":"<h1 class=\"align-center\">How a Web Content Filter Appliance Works</h1>\r\n<p class=\"align-left\">Typically a web content filter appliance protects Internet users and networks by using a combination of blacklists, URIBL and SURBL filters, category filters and keyword filters. Blacklists, URIBL and SURBL filters work together to prevent users from visiting websites known to harbor malware, those that have been identified as fake phishing sites, and those that hide their true identity by using the whois privacy feature or a proxy server. Genuine websites have no reason to hide their true identity.</p>\r\n<p class=\"align-left\">In the category filtering process, the content of millions of webpages is analyzed and assigned a category. System administrators can then choose which categories to block access to (e.g. online shopping, alcohol, pornography, gambling, etc.) depending on whether the web content filter appliance is providing a service to a business, a store, a school, a restaurant, or a workplace. Most appliances for filtering web content also offer the facility to create bespoke categories.</p>\r\n<p class=\"align-left\">Keyword filters have multiple uses. They can be used to block access to websites containing specific words (for example the business name of a competitor), specific file extensions (typically those most commonly used for deploying malware and ransomware), and specific web applications; if, for example, a business wanted to allow its marketing department access to Facebook, but not FaceTime.
Effectively, the keyword filters fine-tune the category settings, enhance security and increase productivity.</p>\r\n<h1 class=\"align-center\">Are there any home web filter appliances?</h1>\r\nFor children today, the Internet has always existed. To them, it’s second nature to pop online and watch a funny video, find a fact, or chat with a friend. But, of course, the Internet is also filled with a lot of dark corners (it’s a hop, skip, and a click to adult content). Parents, then, are presented with the daunting task of not only monitoring what sites their children visit but also their screen time consumption. There are a number of home content filtering appliances that allow parents to do just this. The best parental control apps and devices, be they hardware or software, not only put parents in command of such things as the content their children can view and the amount of time they can spend online but help restore a parent’s sense of control. With them, parents can restrict access to only specific sites and apps, filter dangerous or explicit web content, manage time, and even track their children's location.\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Web_filtering_Appliance.png","alias":"web-filtering-appliance"},"552":{"id":552,"title":"Secure Web Gateway - Appliance","description":"Secure web gateways are generally appliance-based security solutions that prevent advanced threats, block unauthorized access to systems or websites, stop malware, and monitor real-time activity across websites accessed by users within the institution.\r\nA secure web gateway is primarily used to monitor and prevent malicious traffic and data from entering, or even leaving, an organization’s network. Typically, it is implemented to secure an organization against threats originating from the Internet, websites and other Web 2.0 products/services. It is generally implemented through a hardware gateway device deployed at the outer boundaries of a network.
Some of the features a secure Web gateway provides include URL filtering, application-level control, data leakage prevention, and virus/malware code detection.\r\nA Secure web gateway (SWG) protects users against phishing, malware and other Internet-borne threats. Unlike traditional firewalls, SWGs are focused on layer 7 web traffic inspection, both inbound and outbound. As web security solutions, they apply no protection to WAN traffic, which is left to the corporate next generation firewalls. In recent years, SWGs appeared as a cloud service. The cloud instances enable secure web and cloud access from anywhere – including outside the office by mobile users. The traffic coverage and solution form factor remain the key distinctions between SWGs and next generation firewalls, which often provide a very similar level of security capabilities.\r\nA converged, cloud-based network security solution converges the capabilities of a next generation firewall (WAN and Internet traffic inspection) and the extended coverage for mobile users of SWGs.\r\nA converged approach eliminates the need to maintain policies across multiple point solutions and the appliance life cycle.","materialsDescription":"<span style=\"font-weight: bold;\">Why is a secure web gateway important?</span>\r\nSecure web gateways have become increasingly common as cybercriminals have grown more sophisticated in embedding threat vectors into seemingly innocuous or professional-looking websites. These counterfeit websites can compromise the enterprise as users access them, unleashing malicious code and unauthorized access in the background without the user's knowledge. These fake, criminal websites can be quite convincing.\r\nSome of these scam websites appear to be so authentic that they can convince users to enter credit card numbers and personal identification information (PII) such as social security numbers. 
Other sites require only the connection to the user to bypass web browser controls and inject malicious code such as viruses or malware into the user's network. Examples include fake online shopping sites posing as brand-name sellers, sites that appear to be legitimate government agencies and even business-to-business intranets. Secure web gateways can also prevent data from flowing out of an organization, making certain that restricted data is blocked from leaving the organization.\r\n<span style=\"font-weight: bold;\">How does a secure web gateway work?</span>\r\nSecure web gateways are installed as a software component or a hardware device on the edge of the network or at user endpoints. All traffic to and from users to other networks must pass through the gateway that monitors it. The gateway monitors this traffic for malicious code, web application use, and all user/non-user attempted URL connections.\r\nThe gateway checks or filters website URL addresses against stored lists of known and approved websites—all others not on the approved lists can be explicitly blocked. Known malicious sites can be explicitly blocked as well. URL filters that maintain allowed web addresses are maintained in whitelists, while known, off-limits sites that are explicitly blocked are maintained in blacklists. In enterprises, these lists are maintained in the secure gateway's database, which then applies the list filters to all incoming and outgoing traffic.\r\nSimilarly, data flowing out of the network can be checked, disallowing restricted data sources—data on the network or user devices that are prohibited from distribution. Application-level controls can also be restricted to known and approved functions, such as blocking uploads to software-as-a-service (SaaS) applications (such as Office 365 and Salesforce.com). 
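The whitelist/blacklist URL check described above can be sketched as follows; the hostnames and the default-deny policy are assumptions chosen for illustration.

```python
from urllib.parse import urlparse

# Illustrative gateway policy: the blacklist always wins, then the
# whitelist decides; the hostnames here are invented for the example.
BLACKLIST = {"malware.example.net"}
WHITELIST = {"intranet.example.com", "www.example.org"}

def filter_url(url: str, default_allow: bool = False) -> bool:
    """Return True if the gateway should let the request through."""
    host = urlparse(url).hostname or ""
    if host in BLACKLIST:
        return False          # known-bad sites are always blocked
    if host in WHITELIST:
        return True           # approved sites are always allowed
    return default_allow      # everything else follows the default policy
```

Enterprises that maintain only a whitelist effectively run this logic with `default_allow=False`, blocking everything not explicitly approved.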
Although some enterprises deploy secure web gateways in hardware appliances that filter all incoming and outgoing traffic, many organizations use cloud-based, SaaS secure web gateways as a more flexible and less costly solution to deploy and maintain. Organizations with existing hardware investments often combine the two, using hardware at their larger physical sites and cloud-based gateways for remote locations and traveling workers.\r\n<span style=\"font-weight: bold;\">What are some features of secure web gateways?</span>\r\nBeyond basic URL, web application control and data filtering, secure web gateways should provide additional controls and features that enhance network security.\r\n<ul><li>Encrypted traffic analysis. The gateway should compare all traffic to local and global threat lists and reputation sources first, then also analyze the nature of the traffic itself to determine if any content or code poses a threat to the network. This should include SSL-based encrypted traffic.</li><li>Data Loss Prevention. If, for example, a website accepts uploaded documents or data, the documents should first be scanned for sensitive data before being uploaded.</li><li>Social media protection. All information to and from social media should be scanned and filtered.</li><li>Support for all protocols. HTTP, HTTPS, and FTP internet protocols must be supported. While HTTPS is the industry standard now, many sites still support HTTP and FTP connections.</li><li>Integration with zero-day anti-malware solutions. Threats will be discovered, and integration with anti-malware solutions that can detect zero-day (never seen before) threats deliver the best prevention and remediation.</li><li>Integration with security monitoring. Security administrators should be notified of any web gateway security problems via their monitoring solution of choice, typically a security information and event management (SIEM) solution.</li><li>Choice of location. 
Choose where your secure web gateway best fits in your network—the edge, at endpoints, or in the cloud.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Secure_Web_Gateway_Appliance.png","alias":"secure-web-gateway-appliance"},"556":{"id":556,"title":"Antispam - Appliance","description":"Anti-spam appliances are software or hardware devices integrated with on-board software that implement spam filtering and/or anti-spam for instant messaging (also called &quot;spim&quot;) and are deployed at the gateway or in front of the mail server. They are normally driven by an operating system optimized for spam filtering. They are generally used in larger networks such as companies and corporations, ISPs, universities, etc.\r\nThe reasons hardware anti-spam appliances might be selected instead of software could include:\r\n<ul><li>The customer prefers to buy hardware rather than software</li><li>Ease of installation</li><li>Operating system requirements</li><li>Independence of existing hardware</li></ul>","materialsDescription":"<span style=\"font-weight: bold;\">How does an Antispam Appliance Work?</span>\r\nSince an antispam appliance is hardware, it can be placed at the entry point of the email server to inspect and filter every message that enters it. An antispam appliance is capable of evaluating the sender IP addresses included in email messages. The appliance can also examine the message content and then compare it against the criteria and parameters that have been set for receiving email messages.\r\n<span style=\"font-weight: bold;\">Advantages of an Antispam Appliance</span>\r\nAntispam appliances can provide more email security to large networks because they are hardware specifically designed to handle email security at that scale.
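The two checks an antispam appliance performs, evaluating the sender's IP address and comparing message content against configured criteria, can be sketched as a toy scoring filter. The IP, keywords, and threshold below are invented for the example; a real appliance combines many more signals (DNSBL lookups, SPF/DKIM, Bayesian filters, and so on).

```python
# Illustrative antispam scoring; all values here are example configuration.
BLOCKED_SENDERS = {"198.51.100.23"}            # example documentation IP
SPAM_KEYWORDS = {"free money", "act now", "winner"}

def score_message(sender_ip: str, body: str) -> int:
    """Higher score = more spam-like; an appliance would quarantine
    anything above a configured threshold."""
    score = 0
    if sender_ip in BLOCKED_SENDERS:
        score += 10                            # known spammer source
    lowered = body.lower()
    score += sum(5 for kw in SPAM_KEYWORDS if kw in lowered)
    return score

def is_spam(sender_ip: str, body: str, threshold: int = 10) -> bool:
    return score_message(sender_ip, body) >= threshold
```

Keeping the verdict as a score rather than a hard yes/no is what lets administrators tune the threshold to trade false positives against false negatives.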
Also, since an antispam appliance is hardware, it is much easier to install and configure on a network, as opposed to software that may require a specific operating system infrastructure. For example, antispam filtering software written for one operating system may not be available for the system an organization runs, whereas an appliance ships with its own optimized operating system and avoids that dependency.\r\nAnother advantage of using an antispam appliance is its ability to protect a large network from malicious code designed to damage the individual computers on the network. Such code can enter the email server and then spread to email clients via spam. When individual computers get infected, it slows the productivity of the organization and interrupts network processes.\r\nAlthough many large networks deploy a vulnerability assessment program that can protect the network against criminals with malicious intent, sometimes vulnerability assessment is not enough to protect the massive amounts of email that enter an email server on a large network. This is why it is important to deploy an antispam appliance to provide added security for your email server and the email clients on the individual computers that are connected to the network.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Antispam_Appliance.png","alias":"antispam-appliance"},"558":{"id":558,"title":"Secure E-mail Gateway - Appliance","description":"According to technology research firm Gartner, secure email gateways “provide basic message transfer agent functions; inbound filtering of spam, phishing, malicious and marketing emails; and outbound data loss prevention (DLP) and email encryption.”\r\nTo put that in simpler language, a secure email gateway (also called an email security gateway) is a cybersecurity solution that monitors incoming and outgoing messages for suspicious behavior and prevents suspicious messages from being delivered.
Secure email gateways can be deployed via an email server, public cloud, on-premises software, or in a hybrid system. According to cybersecurity experts, none of these deployment options are inherently superior; each one has its own strengths and weaknesses that must be assessed by the individual enterprise.\r\nGartner defines the secure email gateway market as mature, with the key capabilities clearly defined by market demands and customer satisfaction. These capabilities include:\r\n<ul><li>Basic and next-gen anti-phishing and anti-spam</li><li>Additional security features</li><li>Customization of the solution’s management features</li><li>Low false positive and false negative percentages</li><li>External processes and storage</li></ul>\r\nSecure email gateways are designed to surpass the traditional detection capabilities of legacy antivirus and anti-phishing solutions. To do so, they offer more sophisticated detection and prevention capabilities; secure email gateways can make use of threat intelligence to stay up-to-date with the latest threats.\r\nAdditionally, secure email gateways can sandbox suspicious emails, observing their behavior in a safe, enclosed environment that resembles the legitimate network. Security experts can then determine if it is a legitimate threat or a false positive.\r\nSecure email gateway solutions will often offer data loss prevention and email encryption capabilities to protect outgoing communications from prying and unscrupulous eyes.\r\nMuch like SIEM or endpoint detection and response (EDR), secure email gateways can produce false positives and false negatives, although they do tend to be far less than rates found in SIEM and EDR alerts.","materialsDescription":"<span style=\"font-weight: bold;\">How Does a Secure Email Gateway Work?</span>\r\nA secure email gateway offers a robust framework of technologies that protect against email-borne threats. 
It is effectively a firewall for your email, and scans both outbound and inbound email for any malicious content. At a minimum, most secure email gateways offer four security features: virus and malware blocking, spam filtering, content filtering and email archiving. Let's take a look at these features in more detail:\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Virus and Malware Blocking</span></span>\r\nEmails infected with viruses or malware can make up approximately 1% of all email received by an organization. For a secure email gateway to effectively prevent these emails from reaching their intended recipients and delivering their payload, it must scan each email and be constantly kept up-to-date with the latest threat patterns and characteristics.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Spam Filtering</span></span>\r\nBelieve it or not, spam filtering is where the majority of a secure email gateway's processing power is focused. Spam is blocked in a number of different ways. Basic spam filtering usually involves a prefiltering technology that blocks or quarantines any emails received from known spammers. Spam filtering can also detect patterns commonly found in spam emails, such as preferred keywords used by spammers and the inclusion of links that could take the email recipient to a malicious site if clicked. Many email clients also allow users to flag spam messages that arrive in their mailbox and to block senders.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Content Filtering</span></span>\r\nContent filtering is typically applied to outbound email sent by users within the company.
For example, you can configure your secure email gateway to prevent specific sensitive documents from being sent to an external recipient, or put a block on image files or specific keywords within them being sent through the email system.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Email Archiving</span></span>\r\nEmail services, whether they are in the cloud or on-premise, need to be managed efficiently. Storage has been a problem for email administrators for many years, and while you may have almost infinite cloud storage available, email archiving can help to manage both user mailboxes and the efficiency of your systems. Compliance is also a major concern for many companies and email archiving is a must if you need to keep emails for a specific period of time.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Secure_Email_Gateway_Appliance.png","alias":"secure-e-mail-gateway-appliance"},"562":{"id":562,"title":"DDoS Protection - Appliance","description":"A denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled.\r\nIn a distributed denial-of-service attack (DDoS attack), the incoming traffic flooding the victim originates from many different sources. 
This effectively makes it impossible to stop the attack simply by blocking a single source.\r\nA DoS or DDoS attack is analogous to a group of people crowding the entry door of a shop, making it hard for legitimate customers to enter, disrupting trade.\r\nCriminal perpetrators of DoS attacks often target sites or services hosted on high-profile web servers such as banks or credit card payment gateways. Revenge, blackmail and activism can motivate these attacks.\r\nBuying a DDoS mitigation appliance can be highly confusing, especially if you have never done this before. While selecting a DDoS protection solution you must understand the right features and have proper background knowledge. In a distributed denial-of-service attack, the bandwidth or resources of the targeted network are flooded with a large amount of malicious traffic. As a result, the system becomes overloaded and crashes, and the legitimate users of the network are denied service. Mail servers, DNS servers and the servers which host high-profile websites are the main targets of DDoS attacks. Customers who use the services of any shared network are also affected by these attacks. Therefore, anti-DDoS appliances are now vital.","materialsDescription":"<span style=\"font-weight: bold;\">DDoS mitigation solution</span>\r\nThere are two types of DDoS mitigation appliances: software and hardware solutions. Both forms of DDoS protection may claim identical functions.\r\n<ul><li>Firewalls are the most common protection appliance, which can deny protocols, IP addresses or ports. However, they are not strong enough to provide protection from more complicated DDoS attacks.</li><li>Switches are also effective solutions for preventing DDoS attacks. Most of these switches possess rate-limiting and ACL capabilities. Some switches provide packet inspection, traffic shaping, delayed binding and rate limiting.
They can detect fake traffic through balancing and rate filtering.</li><li>Like switches, routers also have rate-limiting and ACL capability, though most routers can themselves be overwhelmed under DoS attacks.</li><li>Intrusion prevention systems are another option when it comes to protection from DDoS attacks. They can be effective in several cases of DDoS attacks: they possess the granularity as well as the processing power required to identify the attacks, then work in an automated manner to resolve the situation.</li><li>There are also rate-based intrusion prevention mechanisms, which are capable of analyzing traffic granularity. These systems can also monitor traffic patterns.</li></ul>\r\nYou must check the connectivity while selecting a DDoS mitigation appliance. Capacity is also an important aspect of a DDoS protection solution. You must figure out the number of ports, IPs, protocols, hosts, URLs and user agents that can be monitored by the appliance. An effective DDoS mitigation solution must also be properly customizable. Your DDoS mitigation appliance should be such that it can be upgraded according to your requirements. These are some important factors that you need to consider while choosing a DDoS mitigation appliance for your system.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_DDoS_Protection_Appliance.png","alias":"ddos-protection-appliance"},"689":{"id":689,"title":"Amazon Web Services","description":"Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms to individuals, companies and governments, on a metered pay-as-you-go basis. In aggregate, these cloud computing web services provide a set of primitive, abstract technical infrastructure and distributed computing building blocks and tools.
One of these services is Amazon Elastic Compute Cloud, which allows users to have at their disposal a virtual cluster of computers, available all the time, through the Internet. AWS's version of virtual computers emulates most of the attributes of a real computer including hardware (CPU(s) & GPU(s) for processing, local/RAM memory, hard-disk/SSD storage); a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, CRM, etc.\r\nThe AWS technology is implemented at server farms throughout the world, and maintained by the Amazon subsidiary. Fees are based on a combination of usage, the hardware/OS/software/networking features chosen by the subscriber, required availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. As part of the subscription agreement, Amazon provides security for subscribers' systems. AWS operates from many global geographical regions including 6 in North America.\r\nIn 2017, AWS comprised more than 90 services spanning a wide range including computing, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things. The most popular include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). Most services are not exposed directly to end users, but instead offer functionality through APIs for developers to use in their applications. Amazon Web Services' offerings are accessed over HTTP, using the REST architectural style and SOAP protocol.\r\nAmazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm. All services are billed based on usage, but each service measures usage in varying ways.
As of 2017, AWS owns a dominant 34% of all cloud (IaaS, PaaS) while the next three competitors Microsoft, Google, and IBM have 11%, 8%, and 6% respectively, according to Synergy Group.","materialsDescription":"<span style=\"font-weight: bold;\">What is \"Amazon Web Services\" (AWS)?</span>\r\nWith Amazon Web Services (AWS), organizations can flexibly deploy storage space and computing capacity in Amazon's data centers without having to maintain their own hardware. A big advantage is that the infrastructure covers all dimensions of cloud computing. Whether it's video sharing, high-resolution photos, print data, or text documents, AWS can deliver IT resources on demand, over the Internet, on a cost-per-use basis. The service has existed since 2006 as a wholly owned subsidiary of Amazon Inc. The idea arose from Amazon's extensive experience with Amazon.com and its own need for cloud-based web service platforms.\r\n<span style=\"font-weight: bold;\">What is Cloud Computing?</span>\r\nCloud computing is a service that gives you access to expert-managed technology resources. In contrast to hardware you have purchased yourself, the cloud platform provides infrastructure (e.g. computing power, storage space) that does not have to be installed and configured. With cloud computing, you pay only for the resources that are actually used. For example, a web shop can increase its computing power for the Christmas business and book less in \"weak\" months.\r\nAccess is via the Internet or VPN. There are no ongoing investment costs after the initial setup; resources such as virtual servers, databases or storage services are charged only after they have been used.\r\n<span style=\"font-weight: bold;\">Where is my data on Amazon AWS?</span>\r\nThere are currently eight Amazon data centers (AWS Regions) in different regions of the world. For each Amazon AWS resource, only the customer decides where it is used or stored.
German customers typically use the data center in Ireland, which is governed by European law.\r\n<span style=\"font-weight: bold;\">How secure is my data on Amazon AWS?</span>\r\nCustomer data is stored in a highly secure infrastructure. Security measures include, but are not limited to:\r\n<ul><li>Protection against DDoS attacks (Distributed Denial of Service)</li><li>Defense against brute-force attacks on AWS accounts</li><li>Secure access: Access is provided via SSL.</li><li>Firewall: Outbound traffic and access to AWS data can be controlled.</li><li>Encrypted data storage: Data can be encrypted with Advanced Encryption Standard (AES) 256.</li><li>Certifications: Regular security reviews by independent auditors as part of the certifications AWS has undergone.</li></ul>\r\nEach Amazon data center (AWS region) consists of at least one Availability Zone. Availability Zones are stand-alone sub-sites that have been designed to be isolated from faults in other Availability Zones (independent power and data supply). Certain AWS resources, such as Database Services (RDS) or Storage Services (S3), automatically replicate your data within the AWS region to the different Availability Zones.\r\nAmazon AWS has appropriate certifications such as ISO 27001 and has implemented a comprehensive security concept for the operation of its data centers.\r\n<span style=\"font-weight: bold;\">Do I have to worry about hardware on Amazon AWS?</span>\r\nNo, all Amazon AWS resources are virtualized. Amazon alone takes care of replacing and upgrading hardware.\r\nNormally you will not even notice defective hardware: defective storage media are replaced by Amazon, and since your data is stored with multiple redundancy, there is usually no problem either.\r\nIncidentally, if your chosen resources do not provide enough performance, you can easily assign more CPU power to your resources with just a few mouse clicks.
You do not have to install anything new; just reboot your virtual machine or virtual database instance.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Amazon_Web_Services.png","alias":"amazon-web-services"},"695":{"id":695,"title":"Windows Server Administration","description":"","materialsDescription":"","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Windows_Server_Administration.png","alias":"windows-server-administration"},"697":{"id":697,"title":"Backup Administration","description":" Nowadays, information, along with human capital, is the most valuable asset of every enterprise. Backup system administration is an integral part of the data and IT system security structure. It is the quality and method of the backup process that determine whether, in the case of a system failure or data loss, it will be possible to maintain the functionality and continuity of the enterprise’s operations. This is why careful creation of backup copies is so important.\r\nCreating backup copies by yourself may be burdensome, expensive and time-consuming. On the other hand, automation of the process introduces a range of improvements, saves time and eliminates the risk of data loss. The copies are created automatically and are protected against interference by third parties.
The network administrator can manage the backup system remotely, monitor the validity of created copies, and retrieve lost information.","materialsDescription":" <span style=\"font-weight: bold;\">The need for backup: when will a backup scheme help out?</span>\r\n<span style=\"font-weight: bold;\">Data corruption</span>\r\nThe need to create a backup is most obvious when your data may undergo damage: physical destruction or theft of the storage medium, a virus attack, accidental and/or illegal changes, etc.\r\nA working backup plan will allow you to restore your data in the event of any failure or accident without excessive cost or complexity.\r\n<span style=\"font-weight: bold;\">Copying information, creating mirrors</span>\r\nA less obvious option for using a backup scheme is to automatically create copies of data not for storage, but for use: cloning and mirroring databases, websites, work projects, etc.\r\nThe backup scheme does not define what, where and why to copy, so you can use backup as a cloning tool.\r\n<span style=\"font-weight: bold;\">Test, training and debugging projects</span>\r\nA special case of data cloning is the creation of a copy of working information in order to debug, improve or study its processing system. You can create a copy of your website or database using the backup instructions to make and debug any changes.\r\nThe need for backing up training and debugging versions of information is all the greater because the changes you make often lead to data loss.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Backup_Administration.png","alias":"backup-administration"},"713":{"id":713,"title":"IT Trainings","description":" IT Training is specific to the Information Technology (IT) industry, or to the skills necessary for performing information technology jobs.
IT Training includes courses related to the application, design, development, implementation, support or management of computer-based information systems.\r\nThe IT training market is segmented into six broad market segments. Based on TrainingIndustry.com research, these segments reflect how IT training companies focus their suite of offerings and from which areas they derive most of their revenue.\r\n<ul><li>IT Infrastructure Training focuses on building, sustaining, and managing technical infrastructure.</li><li>Programming and Database Training involves database construction and management, programming languages, and similar areas.</li><li>Enterprise Business Applications Training involves software applications that manage organizations’ processes, such as ERP, CRM, call center management, automated billing systems, etc.</li><li>Desktop Applications Training focuses on how to use programs and applications for desktop users.</li><li>Certification Training includes certifications, compliance, exam preparation, or boot camp style training programs.</li><li>Cyber Security Training involves courses and training programs centered on IT network and system security.</li></ul>","materialsDescription":" <span style=\"font-weight: bold;\">What is IT Training?</span>\r\nThe organized activity aimed at imparting information and/or instructions to improve the recipient's performance or to help him or her attain a required level of knowledge or skill in the IT sphere.\r\n<span style=\"font-weight: bold;\">Who is an information technology (IT) trainer?</span>\r\nInformation technology trainers may teach IT administrative support staff or an organization's non-technical business users how to operate, configure, and maintain new technology.
Employed either in-house as part of the IT department or by a technology vendor, the information technology trainer helps a company get the most value from its investment in an IT solution.\r\nAn information technology degree helps IT professionals build a foundation for a technical training career. In addition, IT trainers must stay up to date with evolving technology. IT certification programs such as MCSE certification allow trainers to build expertise in specific vendor technologies and system components. According to the Bureau of Labor Statistics, training and development specialists in all fields earned a mean annual salary of $55,310 in 2009. Software publishing was among the top-paying industries for trainers, with a salary of $71,960.\r\n<span style=\"font-weight: bold;\">What is the target audience of IT Training?</span>\r\nStudents of IT training programs are predominantly those who work in jobs related to computer science, network administration, information technology management, cloud computing, telecommunications, etc.\r\nGeneral business professionals and consumers who use IT applications, and computer and software products are other important audiences for IT training. IT training, more so than most other content segments of the training market, contains a substantial amount of business-to-consumer (B2C) training. Consumer training occurs when a student (or purchaser of a training program) completes the training on their own, without the recommendation, supervision, or support of an employer.
This includes individuals aiming to improve their IT skill set or to gain certifications.\r\nThere is also a considerable amount of government spending in the IT training market, predominantly in the area of cybersecurity.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_IT_Trainings.png","alias":"it-trainings"},"718":{"id":718,"title":"IT Consulting","description":" In management, information technology consulting (also called IT consulting, computer consultancy, business and technology services, computing consultancy, technology consulting, and IT advisory) as a field of activity focuses on advising organizations on how best to use information technology (IT) in achieving their business objectives.\r\nThe IT consulting industry can be viewed as a four-tier system:\r\n<ul><li>Professional services firms which maintain large professional workforces and command high bill rates.</li><li>Staffing firms, which place technologists with businesses on a temporary basis, typically in response to employee absences, temporary skill shortages and technical projects.</li><li>Independent consultants, who are self-employed or who function as employees of staffing firms (for US tax purposes, employed on Form W-2), or as independent contractors in their own right (for US tax purposes, on \"1099\").</li><li>Information technology security consultants.</li></ul>\r\nThere are different reasons why consultants are called in:\r\n<ul><li>To gain external, objective advice and recommendations</li><li>To gain access to the consultants' specialized expertise</li><li>Temporary help during a one-time project where hiring permanent employees is not required or necessary</li><li>To outsource all or part of the IT services from a specific company.</li></ul>\r\nThere is a relatively unclear line between management consulting and IT consulting.
There are sometimes overlaps between the two fields, but IT consultants often have degrees in computer science, electronics, technology, or management information systems while management consultants often have degrees in accounting, economics, industrial engineering, finance, or a generalized MBA (Master of Business Administration).\r\nAccording to the Institute for Partner Education & Development, IT consultants' revenues come predominantly from design- and planning-based consulting with a mixture of IT and business consulting. This differs from a systems integrator in that IT consultants do not normally take title to the product. Their value comes from their ability to integrate and support technologies as well as to determine products and brands.","materialsDescription":"<span style=\"font-weight: bold; \">Who is an information technology (IT) consultant?</span>\r\nAn information technology consultant is a third-party service provider who is qualified to advise clients on the best use of IT to meet specific business requirements. IT consultants may work with a professional IT consultancy firm or as independent contractors. They may conduct a business needs assessment and develop an information systems solution that meets the organization's objectives.\r\nSome information technology consultants emphasize technical issues while others help organizations use IT to manage business processes. Still others specialize in a specific IT area such as information security.\r\nIT consultants need a deep knowledge of both business and information technology. A bachelor's degree in management information systems, computer science, or information science is the typical path into a technical consultancy career. IT certifications supplement this foundation with specialized technical training.
Information technology degree and certification programs are available online to accommodate working IT professionals.\r\n<span style=\"font-weight: bold; \">What are the prerequisites and major obstacles?</span>\r\nOnce a business owner has defined the needs for taking the business to the next level, a decision maker defines the scope, cost and time-frame of the project. The role of the IT consultancy company is to support and nurture the company from the very beginning of the project until the end, and to deliver the project not only within scope, time and cost but also with complete customer satisfaction.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Project scoping and planning</span></span>\r\nThe usual problem is that a business owner doesn't know the details of what the project is going to deliver until the process starts. In many cases, the incremental effort involved can lead to significant financial loss.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Business process and system design</span></span>\r\nThe scope of a project is linked intimately to the proposed business processes and systems that the project is going to deliver. Regardless of whether the project is to launch a new product range or discontinue unprofitable parts of the business, the change will have some impact on business processes and systems. The documentation of your business processes and system requirements is as fundamental to project scoping as an architect's plans would be to the costing and scoping of the construction of a building.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Project management support</span></span>\r\nThe most successful business projects are always those that are driven by an employee who has the authority, vision and influence to drive the required changes in a business.
It is highly unlikely that a business owner (decision maker or similar) will realize the changes unless one of these people is already in their employment. However, the project leadership role typically requires significant experience and skills which are not usually found within a company focused on day-to-day operations. Due to this requirement within more significant business change projects/programs, outside expertise is often sought from firms which can bring this specific skill set to the company.\r\n<span style=\"font-weight: bold;\">What are the skills of IT consulting?</span>\r\nAn IT consultant needs to possess the following skills:\r\n<ul><li>Advisory skills</li><li>Technical skills</li><li>Business skills</li><li>Communication skills</li><li>Management skills</li><li>Advisory language skills</li><li>Business and management language skills</li><li>Technical language skills</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_IT_Consulting.png","alias":"it-consulting"},"721":{"id":721,"title":"Business Consulting","description":"Business consulting is a type of service related to preparing recommendations for achieving set goals in economic activity.\r\nBusiness consulting may include not only consulting support but also the implementation of management decisions.
A business consultant is engaged to search for the best ways out of a situation that requires outside support.\r\nExperts in the field of business consulting are brought in when companies need an external evaluation for further development or want to develop short-term and long-term strategies.\r\nThe result of the work is a set of recommendations, as well as development plans and forecasts for the client company.\r\nAs part of business consulting, the following services are usually provided:\r\n<ul><li>drawing up business plans;</li><li>drawing up marketing plans;</li><li>marketing consulting.</li></ul>","materialsDescription":" Business consultants almost never use the word \"problem\"; instead, they talk about opportunities to enhance value. Ask any consultant what they do, and they'll likely say \"I'm in the solutions business.\" Despite criticism that's sometimes leveled at business consultants, they truly can add value to your middle market company, but you need to know when and why to use them. There is a huge range of business issues that consultants can provide solutions for, and different types of consultants bring different ideas to the table.\r\nConsultants come in many forms. Most businesses are familiar with the \"big four\" audit firms: PricewaterhouseCoopers, Deloitte, Ernst & Young, and KPMG. These big-name firms are most likely out of a midmarket business's price range, which will lead midsized companies to work with smaller boutique firms and even individual experts for hire.\r\n<span style=\"font-weight: bold; \">Types of Consultants:</span>\r\nBusiness consultants can generally add value in five major areas of your middle market business:\r\n<ol><li><span style=\"font-weight: bold; \">Management and strategy.</span> Qualified consultants should have a deep understanding of your particular market and bring the best practices from your industry (or even other industries) to your company.
If you're looking to expand your markets geographically, extend your product portfolio, reorganize your middle market company to promote efficiency and cost-effectiveness, buy out a smaller competitor, or increase your overall capabilities, then hiring an experienced management/strategy consultant can make perfect sense. Firms such as McKinsey & Company are famous for helping clients develop and execute better strategies.</li><li><span style=\"font-weight: bold; \">Operations.</span> Want to improve the quality and efficiency of your production processes? An operations consultant such as Accenture can help you create and implement a new way of doing just that. Some consultants specialize in business process re-engineering, meaning that they come in and map out your existing processes, analyze opportunities for reducing the number of steps in that process while maintaining quality, and re-engineer your processes in a way that reduces steps and costs. Other consultants are experts in quality control systems and can help you make changes that will reduce defects.</li><li><span style=\"font-weight: bold; \">IT.</span> This is a fast-growing area for consulting, as the demands of new technology are impacting middle market companies every day. Whether you need to develop a new system or integrate your old systems so that they work together, an IT consultant can help. IT consultants such as IBM will enhance your capabilities and also make your IT more flexible in meeting the dynamic needs of internal and external customers.</li><li><span style=\"font-weight: bold; \">HR.</span> Need to improve the overall satisfaction of your employees, recruit top talent, and retain your top performers? HR consultants such as Hay Group specialize in developing compensation strategies that align with your overall business goals, training, and developing your people in areas such as business communication and leadership. 
They can help you improve performance-related feedback and evaluation to your team, making your employees work smarter.</li><li><span style=\"font-weight: bold; \">Marketing.</span> Whether you need a new logo for your company, a new market position for one of your brands, or a new social media strategy to interact with your customers, marketing consultants can help. Consultants such as The Boston Consulting Group can offer you a creative spark when your own people have run out of ideas, letting you see what other companies have done to attract more customers.</li></ol>\r\n<span style=\"font-weight: bold;\">Reasons for Hiring a Consultant</span>\r\nNow that you know the major types of consultants, why would you need to hire one? Here are five common reasons:\r\n<ol><li><span style=\"font-weight: bold; \">Rent a brain.</span> You don't have the human resources you need because some internal person has quit or your head count has been slashed, so hiring a consultant for a project or on a temporary basis can fill the gap until a full-time internal person is found. You won't have to make a consultant a full-time employee, so breaking off the relationship is relatively easy and cost-effective.</li><li><span style=\"font-weight: bold; \">Manage change (and take the heat).</span> Consultants are experts at fostering change in organizations, so if your midsized company is rife with internal squabbling concerning imminent changes, bringing in a consultant can break the logjam. Consultants know that they're often brought in for political cover and will shoulder blame for unpopular changes such as reducing head count and other cost-cutting measures.</li><li><span style=\"font-weight: bold; \">Teach and implement best practices.</span> Consultants are often the leading experts in the fields they work in. They not only have academic and theoretical expertise, but they've also worked directly with leading companies to implement change. 
If you want best practices in areas such as IT and management, then consultants are the best source available. Why try to invent a best practice when consultants have already implemented some with multiple clients?</li><li><span style=\"font-weight: bold; \">Infuse creativity.</span> Consultants have a fresh perspective on your business, so having an outsider come in and offer ideas can be tremendously helpful. Sometimes your in-house people are too close to your company and don't have the perspective to examine the bigger picture within your market, but consultants can share valuable insights that boost your internal creative thinking.</li><li><span style=\"font-weight: bold; \">Deliver training.</span> You can hire a consultant to share knowledge about almost anything. Consultants are born trainers, so they're a natural choice to do a training course or day-long presentation for your company in almost any area. A good consultant blends theory and practice, and this can deliver high value to your midmarket company.</li></ol>\r\nConsultants can obviously be expensive, and you need to carefully weigh the costs and benefits. 
Only you know the particular needs of your midsized firm, but chances are that a consultant can help turn those needs into highly beneficial solutions.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Business_Consulting.png","alias":"business-consulting"},"731":{"id":731,"title":"IT Project Management","description":" IT project management is the process of planning, organizing and delineating responsibility for the completion of an organization's specific information technology (IT) goals.\r\nIT project management includes overseeing projects for software development, hardware installations, network upgrades, cloud computing and virtualization rollouts, business analytics and data management projects, and implementing IT services.\r\nIn addition to the normal problems that can cause a project to fail, factors that can negatively affect the success of an IT project include advances in technology during the project's execution, infrastructure changes that impact security and data management, and unknown dependent relationships among hardware, software, network infrastructure and data. IT projects may also succumb to the first-time, first-use penalty, which represents the total risk an organization assumes when implementing new technology for the first time. Because the technology hasn’t been implemented or used before in the organization, there are likely to be complications that will affect the project’s likelihood of success.","materialsDescription":" <span style=\"font-weight: bold;\">What is a Project?</span>\r\nA Project is an initiative launched to create a unique product or service. A Project has a defined start date and a defined end date. The start date represents when the project will be launched.
The end date specifies when the project will be completed.\r\nA Project is not a recurring activity, but rather a single effort to produce something new.\r\n<span style=\"font-weight: bold;\">What is Project Management?</span>\r\nProject Management is the collection and application of skills, knowledge, processes, and activities to meet a specific objective that may take the form of a product or service. Project Management is an integrated process of applying 5 major processes and their related activities throughout a project lifecycle: initiating, planning, executing, monitoring and controlling, and closing out.\r\n<span style=\"font-weight: bold;\">What is a Project Management Methodology?</span>\r\nA Project Management Methodology is the overall approach (system) that will be followed to meet the project objectives.\r\n<span style=\"font-weight: bold;\">What are the characteristics of a project?</span>\r\nA Project has three characteristics:\r\n<ul><li>Temporal nature (Is not ongoing and has a definite start and end date.)</li><li>Unique Deliverable (Produces a new, unique product or service that did not previously exist.)</li><li>Progressive (Actions follow a sequence or pattern and progress over time.)</li></ul>\r\n<span style=\"font-weight: bold;\">Who is responsible for the project?</span>\r\nThe Project Manager is directly responsible for the results of the project. He/She should use the necessary skills, knowledge, and tools to meet the project objectives.
During the early phases of the project, the Project Manager, working with the project team, should be able to:\r\n<ul><li>Determine project goals and objectives</li><li>Determine assumptions and constraints</li><li>Define and validate product description</li><li>Determine project requirements</li><li>Define Project deliverables</li><li>Estimate and monitor project resource allocation</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_IT_Project_Management.png","alias":"it-project-management"},"733":{"id":733,"title":"Technical Support","description":" Technical support (often shortened to tech support) refers to services that entities provide to users of technology products or services. In general, technical support provides help with specific problems with a product or service, rather than training, provisioning or customization of the product, or other support services. Most companies offer technical support for the services or products they sell, either included in the cost or for an additional fee. Technical support may be delivered by phone, e-mail, live support software on a website, or another tool where users can log an incident. Larger organizations frequently have internal technical support available to their staff for computer-related problems. The Internet can also be a good source for freely available tech support, where experienced users help other users find solutions to their problems. In addition, some fee-based service companies charge for premium technical support services.\r\nTechnical support may be delivered by different technologies depending on the situation.
For example, direct questions can be addressed using telephone calls, SMS, online chat, support forums, e-mail or fax; basic software problems can be addressed over the telephone or, increasingly, by using remote access repair services; while more complicated problems with hardware may need to be dealt with in person.\r\nTechnical support is a range of services providing assistance with technology such as televisions, computers, and software, typically aiming to help the user with a specific problem.","materialsDescription":"<span style=\"font-weight: bold; \">What are the categories of technical support?</span>\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Call in</span></span>\r\nThis type of technical support has been very common in the services industry. It is also known as \"Time and Materials\" (T&M) IT support. The customer pays for the materials (hard drive, memory, computer, digital devices, etc.) and also pays the technician based on the pre-negotiated rate when a problem occurs.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Block hours</span></span>\r\nBlock hours allow the client to purchase a number of hours upfront at an agreed price. While it is commonly used to offer a reduced hourly rate, it can also simply be a standard non-reduced rate, or represent a minimum fee charged to a client before providing service. The premise behind this type of support is that the customer has purchased a fixed number of hours to use either per month or year. This allows them the flexibility to use the hours as they please without doing the paperwork and the hassle of paying multiple bills.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Managed services</span></span>\r\nManaged services means a company will receive a list of well-defined services on an ongoing basis, with well-defined \"response and resolution times\" for a fixed rate or a flat fee.
This can include things like 24/7 monitoring of servers, 24/7 help desk support for daily computer issues, and on-site visits by a technician when issues cannot be resolved remotely. Some companies also offer additional services like project management, backup and disaster recovery, and vendor management in the monthly price. The companies that offer this type of tech support are known as managed services providers.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Crowdsourced technical support</span></span>\r\nMany companies and organizations provide discussion boards for users of their products to interact; such forums allow companies to reduce their support costs without losing the benefit of customer feedback.\r\n<span style=\"font-weight: bold;\">What is outsourcing technical support?</span>\r\nWith the increasing use of technology in modern times, there is a growing requirement to provide technical support. Many organizations locate their technical support departments or call centers in countries or regions with lower costs. Dell was amongst the first companies to outsource their technical support and customer service departments to India in 2001. There has also been a growth in companies specializing in providing technical support to other organizations. These are often referred to as MSPs (Managed Service Providers).\r\nFor businesses needing to provide technical support, outsourcing allows them to maintain a high availability of service. Such a need may result from peaks in call volumes during the day, periods of high activity due to the introduction of new products or maintenance service packs, or the requirement to provide customers with a high level of service at a low cost to the business. For businesses needing technical support assets, outsourcing enables their core employees to focus more on their work in order to maintain productivity.
It also enables them to utilize specialized personnel whose technical knowledge base and experience may exceed the scope of the business, thus providing a higher level of technical support to their employees.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Technical_Support.png","alias":"technical-support"},"735":{"id":735,"title":"Installation and configuration","description":" Installation or setup is the act of making the system or program ready for execution. Because the process varies for each program and each computer, programs (including operating systems) often come with an installer, a specialized program responsible for doing whatever is needed for their installation. The configuration is an arrangement of functional units according to their nature, number, and chief characteristics. Often, configuration pertains to the choice of hardware, software, firmware, settings, and documentation. The configuration affects system function and performance.\r\nSome computer programs can be executed by simply copying them into a folder stored on a computer and executing them. Other programs are supplied in a form unsuitable for immediate execution and therefore need an installation procedure. 
Once installed, the program can be executed again and again, without the need to reinstall before each execution.\r\nCommon operations performed during software installations include:\r\n<ul><li>Making sure that necessary system requirements are met</li><li>Checking for existing versions of the software</li><li>Creating or updating program files and folders</li><li>Adding configuration data such as configuration files, Windows registry entries or environment variables</li><li>Making the software accessible to the user, for instance by creating links, shortcuts or bookmarks</li><li>Configuring components that run automatically, such as daemons or Windows services</li><li>Performing product activation</li><li>Updating the software versions</li></ul>\r\nThese operations may be performed free of charge or for a fee. In the latter case, installation costs means the costs connected with, or incurred as a result of, installing the drivers or the equipment on the customer's premises. ","materialsDescription":"<span style=\"font-weight: bold;\">What does "Installation" mean?</span>\r\nInstallation is the process of making hardware and/or software ready for use. Obviously, different systems require different types of installations. While certain installations are simple and straightforward and can be performed by non-professionals, others are more complex and time-consuming and may require the involvement of specialists.\r\n<span style=\"font-weight: bold; \">What does "Configuration" mean?</span>\r\nConfiguration is the way a system is set up, or the assortment of components that make up the system. Configuration can refer to either hardware or software, or the combination of both. For instance, a typical configuration for a PC consists of 32MB (megabytes) main memory, a floppy drive, a hard disk, a modem, a CD-ROM drive, a VGA monitor, and the Windows operating system.\r\nMany software products require that the computer have a certain minimum configuration. 
For example, the software might require a graphics display monitor and a video adapter, a particular microprocessor, and a minimum amount of main memory.\r\nWhen you install a new device or program, you sometimes need to configure it, which means to set various switches and jumpers (for hardware) and to define values of parameters (for software). For example, the device or program may need to know what type of video adapter you have and what type of printer is connected to the computer. Thanks to new technologies, such as plug-and-play, much of this configuration is performed automatically.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Installation_and_configuration.png","alias":"installation-and-configuration"},"737":{"id":737,"title":"IT System Testing","description":" System testing is testing conducted on a complete integrated system to evaluate the system's compliance with its specified requirements.\r\nSystem testing takes, as its input, all of the integrated components that have passed integration testing. The purpose of integration testing is to detect any inconsistencies between the units that are integrated together (called assemblages). System testing seeks to detect defects both within the "inter-assemblages" and also within the system as a whole. The actual result is the behavior produced or observed when a component or system is tested.\r\nSystem testing is performed on the entire system in the context of either functional requirement specifications (FRS) or system requirement specification (SRS), or both. System testing tests not only the design but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software or hardware requirements specification(s).\r\nSoftware testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. 
Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test meets the requirements that guided its design and development, responds correctly to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire. As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources.\r\nMobile-device testing assures the quality of mobile devices, like mobile phones, PDAs, etc. The testing is conducted on both hardware and software, and in terms of procedure it comprises R&D testing, factory testing and certification testing. Mobile-device testing involves a set of activities from monitoring and troubleshooting mobile applications, content and services on real handsets. Testing includes verification and validation of hardware devices and software applications.","materialsDescription":" <span style=\"font-weight: bold;\">What is System Testing?</span>\r\nSystem Testing is the testing of a complete and fully integrated software product. Usually, the software is only one element of a larger computer-based system. Ultimately, the software is interfaced with other software/hardware systems. 
System Testing is actually a series of different tests whose sole purpose is to exercise the full computer-based system.\r\nTwo Categories of Software Testing:\r\n<ul><li>Black Box Testing;</li><li>White Box Testing.</li></ul>\r\nSystem test falls under the black box testing category of software testing.\r\nWhite box testing is the testing of the internal workings or code of a software application. In contrast, black box or System Testing is the opposite. The system test involves the external workings of the software from the user's perspective.\r\n<span style=\"font-weight: bold;\">What do you verify in System Testing?</span>\r\nSystem Testing involves testing the software for the following:\r\n<ul><li>Testing the fully integrated applications, including external peripherals, in order to check how components interact with one another and with the system as a whole. This is also called an End-to-End testing scenario.</li><li>Thorough testing of every input in the application to check for desired outputs.</li><li>Testing of the user's experience with the application.</li></ul>\r\nThat is a very basic description of what is involved in system testing. You need to build detailed test cases and test suites that test each aspect of the application as seen from the outside without looking at the actual source code.\r\n<span style=\"font-weight: bold;\">What Types of System Testing Should Testers Use?</span>\r\nThere are over 50 different types of system testing. The specific types used by a tester depend on several variables. Those variables include:\r\n<ul><li><span style=\"font-weight: bold;\">Who the tester works for</span> - This is a major factor in determining the types of system testing a tester will use. Methods used by large companies are different from those used by medium and small companies.</li><li><span style=\"font-weight: bold;\">Time available for testing</span> - Ultimately, all 50 testing types could be used. 
Time is often what limits us to using only the types that are most relevant for the software project.</li><li><span style=\"font-weight: bold;\">Resources available to the tester</span> - Of course, some testers will not have the necessary resources to conduct a testing type. For example, if you are a tester working for a large software development firm, you are likely to have expensive automated testing software not available to others.</li><li><span style=\"font-weight: bold;\">Software Tester's Education</span> - There is a certain learning curve for each type of software testing available. To use some of the software involved, a tester has to learn how to use it.</li><li><span style=\"font-weight: bold;\">Testing Budget</span> - Money becomes a factor not just for smaller companies and individual software developers but for large companies as well.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_IT_System_testing.png","alias":"it-system-testing"},"739":{"id":739,"title":"Deployment and Integration Services","description":" The number of various solutions implemented by customers today is quite large. Often, the subsystems of the seemingly unified IT landscape are either weakly connected with each other, or the interaction between them is limited to transferring files and data by mail or “from hand to hand”.\r\nWestern IT vendors, following a certain trend, offer the customer complete and unified solutions. Such blocks of subsystems solve a specific task and form separate IT centers, which also require the mutual integration of infrastructures. 
This, oddly enough, is even more difficult, as a complete solution does not allow one to penetrate deeply and get access to the required information or control subsystems.\r\nNevertheless, the integration and interconnection of information flows can significantly simplify business processes and lead to an increase in the efficiency of interaction both inside and outside the company (with customers and partners).\r\nThe integration task itself is important for business, as it provides a qualitatively new level of services. This is especially important for companies where IT is the immediate tool for achieving business goals. But it is equally important to make integration optimal, minimizing not only the cost of purchasing equipment and software but also preserving previous IT investments.","materialsDescription":" <span style=\"font-weight: bold; \">The main types of implementation and integration services offered by companies:</span>\r\n<ul><li>Designing IT architecture for integration solutions in the field of analytics, automation and monitoring of business processes;</li><li>Development and integration of network infrastructure subsystems, including scalable telecommunications equipment, server equipment and workstations;</li><li>Defining a single platform and developing a solution for integrating enterprise applications, data and business processes;</li><li>Implementation and maintenance of integration solutions in the field of enterprise management (ERP systems);</li><li>Implementation and maintenance of integration solutions in the field of accounting and analysis of sales and customer relations (CRM systems);</li><li>Implementation and maintenance of integration solutions in the field of accounting and financial analysis;</li><li>Implementation, testing and development of solutions for ensuring the information security of a 
business.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Deployment_and_Integration_Services.png","alias":"deployment-and-integration-services"},"741":{"id":741,"title":"Proof of Concept","description":"Proof of concept (PoC) is a realization of a certain method or idea in order to demonstrate its feasibility, or a demonstration in principle with the aim of verifying that some concept or theory has practical potential. A proof of concept is usually small and may or may not be complete.\r\nProof of concept (POC) is used to test the idea of a certain technical feature or the general design of a product and prove that it is possible to apply those ideas.\r\nIt could be used to test something on just one part of the product before it is tried in practice by making a prototype.\r\nYou can think of this as a pre-prototype version of the product, but it is not even that, since a POC shouldn’t have all the features of the final product, or even of the prototype.\r\nThe main goal of a POC is to prove that it is actually possible to develop that idea and include it as part of the final product.","materialsDescription":" <span style=\"font-weight: bold;\">What is a proof of concept?</span>\r\nProof of concept is the testing of the idea on which the product is based. Thus, this stage is the first phase in the design of the application. It explains how the project should work on the basis of a detailed description of requirements and specifications. The proof is the complete satisfaction of the functions that need to be realized. 
This approach makes it easier to hire developers for a startup in the future.\r\nIn order to confirm the concept in software development, it is necessary to determine the main tasks and perform the following steps:\r\n<ol><li>Identify project goals and methods for their implementation.</li><li>Receive feedback from users and customers.</li><li>Correct the idea and start implementing it.</li></ol>\r\n<span style=\"font-weight: bold;\">Project goals and methods of implementation</span>\r\nBefore you start, you need to understand what goal the project will serve. A web project can be a large marketplace or social network with unique features and a convenient solution. It may also be a CRM system that helps the business increase sales or improve the accounting of business resources. One way or another, each platform has a specific purpose.\r\nThe next step is to work out methods of achieving the goal. At this stage, it is important not to delve into the details, but to evaluate common elements: how the project will work, what functions will be implemented, how the web application will interact with users, etc. It is very important to consider each item and write it down in the report. In fact, this is a small brainstorm. Typically, it takes from a few days to a couple of weeks. When the implementation plan is completed, you can begin to collect feedback from future users.\r\n<span style=\"font-weight: bold;\">Feedback from users and customers</span>\r\nWhen you have a ready document with a description of the project and its functions, you need to get feedback from users or customers. Offer them your solution to a particular problem. Familiarize them with the implementation methods. You will receive many suggestions for improvement. At this point, some of your assumptions will be disproved. It is important to listen and collect feedback. There is no need to hurry and change the concept or implement everything that future users are asking for. 
These are not expert evaluations, only proposals.\r\n<span style=\"font-weight: bold;\">Idea correction and implementation</span>\r\nIt is at this stage that the final proof of the concept takes place. Having received feedback, you can clearly understand how users will interact with your project and what emotions it will evoke. It is necessary to understand that this is a preliminary evaluation of the concept. Some recommendations may have no value, while others can significantly affect further development. Thus, based on the information received, it is necessary to consider what can be changed to make the project more convenient. If you received a lot of negative feedback, it makes sense to stop the development process, or at least think about a new, improved version. So, if you have really decided to start development, we recommend starting the design with an MVP. The minimal version will allow you to develop the project in the shortest possible time and test the idea on real users.\r\nProof of concept is one of the important stages in the development of complex and expensive projects. It allows you to determine, with high probability, the value of the project even before development begins. Typically, the process takes from a few days to a couple of weeks. It gives a clear idea of how the project will work and what functions it will perform. If you approach the feedback analysis process with a clear head, this step can save you money and time in the future.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Proof_of_Concept.png","alias":"proof-of-concept"},"743":{"id":743,"title":"IT System documentation writing","description":" Without the development of technical documentation, it is impossible to create any complex technical solution. High-quality documentation, that is, informative, complete and understandable, is the key to the success of a product at all stages of its life cycle. 
Properly written documentation is the basis of the functionality and effectiveness of information systems. It underpins the processes of creating databases, developing software, and selecting and configuring network and server software.\r\nMany organizations at the initial stages of creating and implementing technical solutions do not pay enough attention to this factor, which often hinders the entry of a new product to the market.\r\nWriting documentation requires the contractor to have specific knowledge, skills and experience, and involves considerable labor.\r\nThe main task of the working documentation is to give a complete picture of how the system is structured, what it consists of and how it functions.\r\nThere is no single standard for the development of this type of documentation. In most cases, its structure is selected for a specific situation. But you can take any algorithm that has already proven its effectiveness as the basis.","materialsDescription":"<span style=\"font-weight: bold; \">What is software documentation?</span>\r\nSoftware documentation - printed user manuals, online documentation and help text describing how to use the software product.\r\n<span style=\"font-weight: bold; \">What is process documentation?</span>\r\nA process document outlines the steps necessary to complete a task or process. It is internal, ongoing documentation of the process while it is occurring; it cares more about the “how” of implementation than the “what” of process impact.\r\n<span style=\"font-weight: bold;\">What should be in the working documentation?</span>\r\nFirst of all, technical descriptions of implemented solutions. 
These are IT infrastructure diagrams, configuration descriptions, etc.\r\n<span style=\"font-weight: bold;\">What does well-written working documentation give?</span>\r\n<ul><li>systematizes data on IT infrastructure;</li><li>helps to understand the system architecture and functioning of connected services;</li><li>facilitates management decisions (for example, shows which service can be removed or replaced and how this will be reflected across the whole system);</li><li>makes it possible to comprehensively evaluate the selected IT structure and also to notice, in good time, mistakes made or holes in the architecture.</li></ul>\r\n<span style=\"font-weight: bold;\">What are the key benefits of writing technical documentation?</span>\r\nThe development of documentation will allow you to:\r\n<ul><li>increase user satisfaction;</li><li>reduce the load on the system administrator;</li><li>reduce system support costs.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_IT_System_documentation_writing.png","alias":"it-system-documentation-writing"},"765":{"id":765,"title":"Network Packet Broker","description":" <span style=\"font-weight: bold; \">Network Packet Brokers</span> (NPBs) are devices that do just what the name suggests: they “broker” incoming network traffic to any number of security, application performance monitoring, or network forensic tools. The need to “broker” packets before they are sent to tools comes from two major driving forces: first, the throughput of tools is limited; second, every tool requires a different subset of traffic to maximize performance.\r\nA packet broker is designed to deliver only the traffic of interest required by any specific tool. NPBs achieve this by using a variety of filtering options. 
NPBs act as the man-in-the-middle between TAP/SPAN ports and the tool itself and should be designed with four different deployment scenarios in mind.\r\n<span style=\"font-weight: bold; \">Broker traffic from a single TAP port to a single tool.</span> In this application the most important function of the NPB is its filtering capability. Most tools currently deployed handle up to 10Gbps of traffic at any given time. If the incoming TAP traffic is 40Gbps, the traffic needs to be filtered by a factor of 4. The NPB needs to ensure the traffic is filtered adequately to meet this limitation while providing every packet the tool needs to do its job.\r\n<span style=\"font-weight: bold; \">Broker traffic from multiple TAP ports to a single tool.</span> This application builds on the previous, but now the NPB needs to support aggregation. Aggregation allows the user to set up single filters that will be applied to all incoming traffic streams, reducing the setup time/complexity of the device. Aggregation also ensures the tool receives traffic from multiple streams.\r\n<span style=\"font-weight: bold; \">Broker traffic from a single TAP port to multiple tools.</span> This application builds on the first; however, the NPB now needs to be able to replicate and/or load balance traffic. The traffic needs to be replicated/mirrored/copied to ensure each tool has access to any necessary packets. To properly handle this application, the NPB must also support egress filtering, to allow unique filter criteria for each different tool. If multiple tools require the same filtered traffic, the NPB must also support load balancing and options on how to load balance. 
\r\n<span style=\"font-weight: bold; \">Broker traffic from multiple TAP ports to multiple tools.</span> The final application builds on the previous three and uses filtering, aggregation and load balancing to guarantee each tool operates at its maximum efficiency.\r\nThe current crop of NPBs plays a critical role in enabling businesses to perform several functions, such as moving to a virtual network, upgrading the network, and cost-effectively adding more advanced tools. However, infrastructure evolution continues to march on, and now it’s time for the <span style=\"font-weight: bold;\">next-generation network packet broker</span>.\r\n<span style=\"font-weight: bold;\">Next-generation NPBs</span> are designed to meet the needs of digital businesses. A good analogy to consider is the evolution of application delivery controllers (ADCs). They started as simple load balancers and then added advanced load-balancing capabilities to become ADCs. After several years, security and cloud capabilities were introduced, and the product category shifted to advanced ADCs. The same trend is happening with NPBs as they evolve to next-generation NPBs.","materialsDescription":"<h1 class=\"align-center\"> Network Packet Brokers - How can they help you? </h1>\r\nAs your network continues to grow physically and virtually and speeds increase up to 100 Gig, it has become increasingly difficult to ensure that all your security and monitoring tools see and receive the real-time traffic that they need to analyze. These tools need to know exactly what is happening on the network, and are only as good as the data they receive.\r\nThe challenge is to ensure each tool sees the traffic that it needs to. 
Using a combination of Taps, Bypass Switches and packet brokers, we can set up a visibility architecture that sits between the IT infrastructure and the tools, giving you access to all the traffic traversing the virtual and physical links.\r\n<p class=\"align-center\"><span style=\"font-weight: bold;\">NPB USES</span></p>\r\n<ul><li>Data from one network link, to one tool</li><li>Data from one network link, to multiple tools – Regeneration</li><li>Data from multiple network links, to one tool - Aggregation</li><li>Data from multiple network links, to multiple tools</li><li>Load balance traffic among all your tools</li></ul>\r\n<p class=\"align-center\"><span style=\"font-weight: bold;\">HOW NPBs BENEFIT YOU</span></p>\r\n<p class=\"align-left\">Ultimately, NPBs make monitoring and security tools more effective by giving them access to a range of data from across the entire network. Blind spots are reduced, giving tools the visibility they need to identify and tackle performance and security threats.</p>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Network_Packet_Broker.png","alias":"network-packet-broker"},"784":{"id":784,"title":"NGFW - next-generation firewall - Appliance","description":" A next-generation firewall (NGFW) is a part of the third generation of firewall technology, combining a traditional firewall with other network device filtering functionalities, such as an application firewall using in-line deep packet inspection (DPI) and an intrusion prevention system (IPS). Other techniques might also be employed, such as TLS/SSL encrypted traffic inspection, website filtering, QoS/bandwidth management, antivirus inspection and third-party identity management integration (e.g. LDAP, RADIUS, Active Directory).\r\nNGFWs include the typical functions of traditional firewalls such as packet filtering, network- and port-address translation (NAT), stateful inspection, and virtual private network (VPN) support. 
The goal of next-generation firewalls is to include more layers of the OSI model, improving filtering of network traffic that is dependent on the packet contents.\r\nNGFWs perform deeper inspection compared to the stateful inspection performed by first- and second-generation firewalls. NGFWs use a more thorough inspection style, checking packet payloads and matching signatures for harmful activities such as exploitable attacks and malware.\r\nNGFWs also provide improved detection of encrypted applications and intrusion prevention services. Modern threats like web-based malware attacks, targeted attacks, application-layer attacks, and more have had a significantly negative effect on the threat landscape. In fact, more than 80% of all new malware and intrusion attempts are exploiting weaknesses in applications, as opposed to weaknesses in networking components and services.\r\nStateful firewalls with simple packet filtering capabilities were efficient at blocking unwanted applications, as most applications met the port-protocol expectations. Administrators could promptly prevent an unsafe application from being accessed by users by blocking the associated ports and protocols. But today, blocking a web application like Farmville that uses port 80 by closing the port would also mean complications with the entire HTTP protocol.\r\nProtection based on ports, protocols and IP addresses is no longer reliable or viable. This has led to the development of an identity-based security approach, which takes organizations a step ahead of conventional security appliances that bind security to IP addresses.\r\nNGFWs offer administrators a deeper awareness of and control over individual applications, along with deeper inspection capabilities by the firewall. Administrators can create very granular "allow/deny" rules for controlling use of websites and applications in the network. 
","materialsDescription":"<span style=\"font-weight: bold;\"> What is a next-generation firewall (NGFW)?</span>\r\nAn NGFW contains all the normal defences that a traditional firewall has, as well as intrusion prevention software and application control, alongside other bonus security features. NGFWs are also capable of deep packet inspection, which enables more robust filters.\r\nIntrusion prevention software monitors network activity to detect and stop vulnerability exploits from occurring. This is usually done by monitoring for breaches against the network policies in place, as a breach is usually indicative of malicious activity.\r\nApplication control software simply sets up a hard filter for programs that are trying to send or receive data over the Internet. This can either be done by blacklist (programs in the filter are blocked) or by whitelist (programs not in the filter are blocked).","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_NGFW.png","alias":"ngfw-next-generation-firewall-appliance"},"791":{"id":791,"title":"Vulnerability Scanner","description":" A <span style=\"font-weight: bold;\">vulnerability scanner</span> is a computer program designed to assess computers, networks or applications for known weaknesses. In plain words, these scanners are used to discover the weaknesses of a given system. They are utilized in the identification and detection of vulnerabilities arising from misconfigurations or flawed programming within a network-based asset such as a firewall, router, web server, application server, etc. They are typically available as SaaS (Software as a Service), provided over the internet and delivered as a web application. \r\nMost vulnerability scanners will also attempt to log in to systems using default or other credentials in order to build a more detailed picture of the system. 
After building up an inventory, the vulnerability scanner checks each item in the inventory against one or more databases of known vulnerabilities to see if any items are subject to any of these vulnerabilities. The result of such a scan is a system vulnerability analysis, highlighting any items that have known vulnerabilities and may need threat and vulnerability management.\r\n<span style=\"font-weight: bold;\">How vulnerability scanning works</span>. Vulnerability scanning finds systems and software that have known security vulnerabilities, but this information is only useful to IT security teams when it is used as the first part of a four-part vulnerability management process. <span style=\"font-weight: bold;\">The vulnerability management process involves:</span>\r\n<ul><li>Identification of vulnerabilities</li><li>Evaluation of the risk posed by any vulnerabilities identified</li><li>Treatment of any identified vulnerabilities</li><li>Reporting on vulnerabilities and how they have been handled</li></ul>\r\n<br /><span style=\"font-weight: bold;\">Types of vulnerability scans. </span>Not all vulnerability scans are alike, and to ensure compliance with certain regulations (such as those set by the PCI Security Standards Council) it is necessary to carry out two distinct types of vulnerability scans: an internal and an external vulnerability scan. \r\n<span style=\"font-weight: bold;\">External vulnerability scan.</span> As the name suggests, an external vulnerability scan is carried out from outside an organization's network, and its principal purpose is to detect vulnerabilities in the perimeter defenses such as open ports in the network firewall or specialized web application firewall. An external vulnerability scan can help organizations fix security issues that could enable hackers to gain access to the organization's network.\r\n<span style=\"font-weight: bold;\">Internal vulnerability scan. 
</span>By contrast, an internal vulnerability scan is carried out from inside an organization's perimeter defenses. Its purpose is to detect vulnerabilities that could be exploited by hackers who successfully penetrate the perimeter defenses, or equally by "insider threats" such as contractors or disgruntled employees who have legitimate access to parts of the network.\r\n<span style=\"font-weight: bold;\">Unauthenticated and authenticated vulnerability scans.</span> A similar but not always identical variation of internal and external vulnerability scans is the concept of unauthenticated and authenticated vulnerability scans. Unauthenticated scans, like external scans, search for weaknesses in the network perimeter, while authenticated scans provide vulnerability scanners with various privileged credentials, allowing them to probe the inside of the network for weak passwords, configuration issues, and misconfigured databases or applications.<br /><br />","materialsDescription":"<h1 class=\"align-center\">What is Vulnerability Assessment?</h1>\r\nVulnerability Assessment, also known as Vulnerability Testing, is a scanning process performed to evaluate the security risks in a software system in order to reduce the probability of a threat. Vulnerability Analysis depends upon two mechanisms, namely Vulnerability Assessment and Penetration Testing (VAPT).\r\n<p class=\"align-center\"><span style=\"font-weight: bold;\">Types of vulnerability scanners:</span></p>\r\n<span style=\"font-weight: bold;\">Host Based. </span>Identifies the issues in the host or the system. The process is carried out using host-based scanners to diagnose the vulnerabilities. The host-based tools will load mediator software onto the target system; it will trace the events and report them to the security analyst.\r\n<span style=\"font-weight: bold;\">Network-Based.</span> It will detect open ports, and identify the unknown services running on these ports. 
Then it will disclose possible vulnerabilities associated with these services. This process is done by using network-based scanners.\r\n<span style=\"font-weight: bold;\">Database-Based.</span> It will identify the security exposure in database systems, using tools and techniques to prevent SQL injections. (SQL injection: injecting SQL statements into the database, which lets malicious users read sensitive data from the database and update data in it.)\r\n<h1 class=\"align-center\">How do vulnerability scanners work?</h1>\r\nVulnerability scanning is an inspection of the potential points of exploit on a computer or network to identify security holes.\r\nA security scan detects and classifies system weaknesses in computers, networks and communications equipment and predicts the effectiveness of countermeasures. A scan may be performed by an organization’s IT department or a security service provider, possibly as a condition imposed by some authority. Vulnerability scans are also used by attackers looking for points of entry.\r\nA vulnerability scanner runs from the end point of the person inspecting the attack surface in question. The software compares details about the target attack surface to a database of information about known security holes in services and ports, anomalies in packet construction, and potential paths to exploitable programs or scripts. The scanner software attempts to exploit each vulnerability that is discovered.\r\nRunning a vulnerability scan can pose its own risks as it is inherently intrusive on the target machine’s running code. As a result, the scan can cause issues such as errors and reboots, reducing productivity.\r\n<h1 class=\"align-center\">How to choose the best vulnerability scanning tool?</h1>\r\nWhen researching vulnerability scanners, it's important to find out how they're rated for accuracy (the most important metric) as well as reliability, scalability and reporting. 
If accuracy is lacking, you'll end up running two different scanners, hoping that one picks up vulnerabilities that the other misses. This adds cost and effort to the scanning process. \r\n<span style=\"font-weight: bold;\">Software-Based Vulnerability Scanners.</span> These types of scanning products generally include configuration auditing, target profiling, penetration testing and detailed vulnerability analysis. They integrate with Windows products, such as Microsoft System Center, to provide intelligent patch management; some work with mobile device managers. They can scan not only physical network devices, servers and workstations, but extend to virtual machines, BYOD mobile devices and databases.\r\n<span style=\"font-weight: bold;\">Cloud-Based Vulnerability Scanners: </span>Continuous, On-Demand Monitoring. A newer type of vulnerability finder is delivered on-demand as Software as a Service (SaaS). Like software-based scanners, on-demand scanners incorporate links for downloading vendor patches and updates for identified vulnerabilities, reducing remediation effort. These services also include scanning thresholds to prevent overloading devices during the scanning process, which can cause devices to crash.\r\n<h1 class=\"align-center\">What is mobile application security scanner?</h1>\r\nMobile application security testing can help ensure there aren’t any loopholes in the software that may cause data loss. The sets of tests are meant to attack the app to identify possible threats and vulnerabilities that would allow external persons or systems to access private information stored on the mobile device. \r\nMobile application vulnerability scanner can help to ensure that applications are free from the flaws and weaknesses that hackers use to gain access to sensitive information. 
From backdoors to malicious code and other threats, these flaws may be present both in commercial and open source applications as well as software developed in-house.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Vulnerability_Scanner.png","alias":"vulnerability-scanner"},"793":{"id":793,"title":"Web Application Vulnerability Scanner","description":" A <span style=\"font-weight: bold; \">web application vulnerability scanner,</span> also known as a <span style=\"font-weight: bold; \">web application security scanner,</span> is an automated security tool. It scans web applications for malware, vulnerabilities, and logical flaws. Web application scanners use black-box tests, as these tests do not require access to the source code but instead launch external attacks to test for security vulnerabilities. These simulated attacks can detect path traversal, cross-site scripting (XSS), and command injection.\r\nWeb app scanners are categorized as <span style=\"font-weight: bold; \">Dynamic Application Security Testing (DAST) tools.</span> DAST tools provide insight into how your web applications behave while they are in production, enabling your business to address potential vulnerabilities before a hacker uses them to stage an attack. As your web applications evolve, DAST solutions continue to scan them so that your business can promptly identify and remediate emerging issues before they develop into serious risks.\r\nA web app vulnerability scanner first crawls the entire website, analyzing in-depth each file it finds, and displaying the entire website structure. After this discovery stage, it performs an automatic audit for common security vulnerabilities by launching a series of Web attacks. Web application scanners check for vulnerabilities on the Web server, proxy server, Web application server and even on other Web services. 
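A minimal sketch of the black-box idea described above: inject a marker payload into a request parameter and check whether the response reflects it back unescaped, which hints at reflected XSS. The two fetch functions below are hypothetical stand-ins for real HTTP requests, one echoing input unsafely and one escaping it.

```python
import html

MARKER = '<script>probe()</script>'

def vulnerable_fetch(url, params):
    # Hypothetical endpoint that echoes user input without escaping.
    return 'You searched for: ' + params.get('q', '')

def safe_fetch(url, params):
    # Hypothetical endpoint that HTML-escapes user input.
    return 'You searched for: ' + html.escape(params.get('q', ''))

def probe_reflected_xss(fetch, url, param):
    # Send the marker and report whether it comes back unescaped.
    body = fetch(url, {param: MARKER})
    return MARKER in body

print(probe_reflected_xss(vulnerable_fetch, '/search', 'q'))  # True
print(probe_reflected_xss(safe_fetch, '/search', 'q'))        # False
```

Real DAST tools crawl every page first and repeat this kind of probe, with many payload variants, against every discovered input.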
Unlike source code scanners, web application scanners don't have access to the source code and therefore detect vulnerabilities by actually performing attacks.\r\nA web application vulnerability assessment is very different from a general vulnerability assessment, where the security focus is on networks and hosts. App vulnerability scanners scan ports, connect to services, and use other techniques to gather information revealing the patch levels, configurations, and potential exposures of your infrastructure.\r\nAutomated web application scanning tools help the user make sure the whole website is properly crawled, and that no input or parameter is left unchecked. Automated web vulnerability scanners also help in finding a high percentage of the technical vulnerabilities, and give you a very good overview of the website’s structure and security status. \r\nThe best way to identify web application security threats is to perform a web application vulnerability assessment. These threats could leave your organization exposed if they are not properly identified and mitigated. Therefore, implementing a web app security scanner solution should be of paramount importance for your organization's security plans in the future. \r\n\r\n","materialsDescription":"<h1 class=\"align-center\">Why is Web Application Vulnerability Scanning important?</h1>\r\nWeb applications are the technological base of modern companies. That’s why more and more businesses are betting on the development of this type of digital platform. They stand out because they make it possible to automate processes, simplify tasks, be more efficient and offer a better service to the customer.<br /><br />The objective of web applications is that the user completes a task, be it buying, making a bank transaction, accessing e-mail, editing photos or texts, among many other things. In fact, they are very useful for an endless number of services, hence their popularity. 
Their disadvantages are few, but there is one that requires special attention: vulnerabilities.\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">Main web application security risks</span></p>\r\nWeb vulnerability scanner tools will help you keep your services protected. However, it is important to be aware of the major security risks that exist so that both developers and security professionals are always alert and can find the most appropriate solutions in a timely manner.\r\n<ul><li><span style=\"font-weight: bold; \">Injection</span></li></ul>\r\nThis is a vulnerability that affects application databases. It occurs when untrusted data is sent to an interpreter as part of a command or query. The attacker may inject malicious code to disrupt the normal operation of the application, making it access data without authorization or execute unintended commands.\r\n<ul><li><span style=\"font-weight: bold; \">Authentication failures</span></li></ul>\r\nIf a vulnerability scan in web applications finds a failure, it may be due to broken authentication. This is a critical vulnerability, as it allows the attacker to impersonate another user. This can compromise important data such as usernames, passwords, session tokens, and more.\r\n<ul><li><span style=\"font-weight: bold; \">Sensitive data exposure</span></li></ul>\r\nA serious risk is the exposure of sensitive data, especially financial information such as credit card or account numbers, personal data such as place of residence, or health-related information. An attacker who finds this type of vulnerability may modify or steal this data and use it fraudulently. 
Therefore, it is essential to use web app scanning tools to find vulnerabilities in web applications.<br /><br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Web_Application_Vulnerability_Scanner.png","alias":"web-application-vulnerability-scanner"},"844":{"id":844,"title":"Data access","description":"Data access is a generic term referring to a process which has both an IT-specific meaning and other connotations involving access rights in a broader legal and/or political sense. Two fundamental categories of data access exist:\r\n<ul><li>sequential access (as in magnetic tape, for example)</li><li>random access (as in indexed media)</li></ul>\r\nThe <span style=\"font-weight: bold;\">sequential method</span> requires information to be moved within the disk using a seek operation until the data is located. Each segment of data has to be read one after another until the requested data is found. Reading data <span style=\"font-weight: bold;\">randomly </span>allows users to store or retrieve data anywhere on the disk, and the data is accessed in constant time.\r\nOftentimes when using random access, the data is split into multiple parts or pieces and located anywhere randomly on a disk. Sequential files are usually faster to load and retrieve because they require fewer seek operations.\r\nData access management crucially involves authorization to access different data repositories. Data access solutions can help distinguish the abilities of administrators and users. For example, administrators may have the ability to remove, edit and add data, while general users may not even have "read" rights if they lack access to particular information.\r\nA <span style=\"font-weight: bold;\">data access right</span> (DAR) is a permission that has been granted to allow a person or computer program to locate and read digital information at rest. 
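The sequential-versus-random access contrast described above can be illustrated with fixed-width records; the in-memory buffer below stands in for a file on disk. A sequential read scans records in order until it reaches the target, while random access seeks straight to a computed offset in constant time.

```python
import io

RECORD_SIZE = 8  # fixed-width records make offsets predictable

# Build 100 zero-padded records and treat the buffer as a disk file.
data = b''.join(str(i).zfill(RECORD_SIZE).encode() for i in range(100))
disk = io.BytesIO(data)

def read_sequential(f, index):
    # Scan from the start, one record at a time, until the target.
    f.seek(0)
    for _ in range(index + 1):
        record = f.read(RECORD_SIZE)
    return record

def read_random(f, index):
    # Jump directly to the record's byte offset, then read once.
    f.seek(index * RECORD_SIZE)
    return f.read(RECORD_SIZE)

print(read_sequential(disk, 42))  # b'00000042'
print(read_random(disk, 42))      # b'00000042'
```

Both calls return the same record; the difference is that the sequential version does work proportional to the index, while the seek-based version does not.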
Digital access rights play an important role in information security and compliance.\r\nIn compliance, DARs are often granted to data subjects by law. For example, under the General Data Protection Regulation (GDPR) in the European Union, a data subject has the right to access their own personal data and request a correction or erasure.\r\nTo avoid losing or corrupting corporate data, organizations should grant only the minimum required access to each user, a concept known as the <span style=\"font-weight: bold;\">principle of least privilege</span> (POLP). To ensure confidentiality, information should be used by authorized personnel only. To maintain data integrity, data should not be modified accidentally or deliberately without authorization. Additionally, to provide data availability, the system should operate within the required levels of service.\r\nHistorically, each repository (including each different database, file system, etc.) might require the use of different methods and languages, and many of these repositories stored their content in different and incompatible formats.\r\nOver the years, standardized languages, methods, and formats have developed to serve as interfaces between the often proprietary, and always idiosyncratic, specific languages and methods. Such standards include SQL (1974- ), ODBC (ca 1990- ), JDBC, XQJ, ADO.NET, XML, XQuery, XPath (1999- ), and Web Services.\r\nSome of these standards enable translation of data from unstructured (such as HTML or free-text files) to structured (such as XML or SQL). Structures such as connection strings and DBURLs can attempt to standardise methods of connecting to databases.<br /><br /><br />","materialsDescription":"<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What is a database?</span></h1>\r\nA database is a collection of related data which represents some aspect of the real world. 
A database system is designed to be built and populated with data for a certain task.\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What is DBMS?</span></h1>\r\nA Database Management System (DBMS) is software for storing and retrieving users' data while applying appropriate security measures. It allows users to create their own databases as per their requirements.\r\nIt consists of a group of programs which manipulate the database and provide an interface between the database, its users, and other application programs.\r\nThe DBMS accepts a request for data from an application and instructs the operating system to provide the specific data. In large systems, a DBMS helps users and other third-party software to store and retrieve data.\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the best data access rights practices?</span></h1>\r\nTo keep data access control issues from arising, the following practices are recommended:\r\n<ul><li>Company security policies should specify what employees can and cannot do on their computers. For example, whether individual users are allowed personal email, file downloads, software installation, information ownership and access to particular websites.</li><li>Data should be classified based on its degree of confidentiality (and the risks associated with being leaked) and criticality (the integrity and the risk of alteration or destruction).</li><li>Access to data should be controlled using authorization or authentication and by employing traceability (which consists of tracking access to sensitive IT resources).</li><li>Regular detailed audits should be performed to help set up controls surrounding identity management, privileged users and access to resources.</li><li>The rights of users should be limited. 
For example, Windows 10 offers standard and administrator accounts, but most users should just have standard accounts to complete their daily tasks.</li></ul>\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Data_access.png","alias":"data-access"},"870":{"id":870,"title":"Cyber Security Training and Simulation","description":" Cyber security training and simulation is a powerful tool for CISOs and SOC managers to accurately simulate their network and security tools within a dynamic IT or OT environment. A high-quality cyber range offers a rich catalog of simulated incident scenarios, in varying levels of difficulty, which security managers can choose from to train their teams. This opens up numerous new opportunities, several of which include: \r\n<ul><li>An environment for team training, where security staff can improve their communication and teamwork, both of which are critical elements of an efficient incident response team, and impossible to practice using conventional training systems.</li></ul>\r\n<ul><li>A means of training the entire organization in a breach scenario and the related business dilemmas, beyond incident response, including potential business executive decisions. Consider a ransomware scenario where executives are required to decide whether to pay the ransom, negotiate, or mitigate.</li></ul>\r\n<ul><li>A test-bed for potential products where they can be tested in a safe and controlled environment.</li></ul>\r\n<ul><li>A training environment for newly introduced products enabling team members to master new technologies and dramatically improve their performance and skills.</li></ul>\r\nThe way cyber security training and simulation maximizes the effectiveness of security training is by providing a virtual replica of your actual “warzone”, resulting in a true-to-life experience. Security teams should use the actual security tools they use at work, and should experience their familiar network setup and traffic. 
Threats should be simulated accurately, including advanced, evolving threats, targeted malware and ransomware.\r\nThe potential of simulation-based training, as compared to traditional training, is substantial. Organizations can not only train people but also test processes and technologies in a safe environment. Furthermore, security teams can train as individuals or as a group, to improve their teamwork. With the help of simulation, your team can experience high-fidelity threat scenarios while training, and improve their capabilities, rather than encountering these threats for the first time during an actual attack. This results in a dramatic improvement in their performance.","materialsDescription":" <span style=\"font-weight: bold; \">Why do you need to train cybersecurity employees?</span>\r\nNew threats and attack vectors emerge, spanning a converged attack surface of IT and OT networks, as well as IoT devices. Attacks have become time-sensitive, requiring SOCs to detect and respond within seconds to minutes, and challenging the SOC’s ability to perform effectively.\r\nForward-thinking CISOs now understand that rushing to spend their growing budgets to purchase the latest tools, hoping that the new technology will finally improve their security posture, will not resolve their strategic, and, in many cases, existential problems. They are beginning to acknowledge that their teams are not professionally equipped to face the new generation of threats, not because of a lack of products or technologies, but because they don't really know how to operate them effectively. Most of them have never trained effectively, either as individuals or as a team, never faced a multi-stage attack, and have never used their technologies in a real-life attack scenario requiring them to respond to an evolving attack within minutes. \r\nInvesting in our cyber experts and in our SOC teams, both as individuals and as a unified team, is THE key to an effective SOC. 
In the case of cybersecurity, this challenge is amplified. The shortage of cybersecurity professionals is at a critical level and will only continue to grow, forcing cybersecurity leaders to hire inexperienced team members to fill open positions. Security analysts, often junior and barely trained, are expected to master dozens of security products in increasing numbers, defending against threats they have never experienced before. \r\n\r\n<span style=\"font-weight: bold; \">What is a cybersecurity simulation and why is it needed?</span>\r\n<span style=\"color: rgb(97, 97, 97); \">Traditional IT security training is largely ineffective, because it relies on sterile, mostly theoretical instruction. It is often conducted on the job by SOC team members rather than by professional instructors. To get our security teams prepared to face today’s multi-dimensional IT and OT security challenges, we must place them in a technology-driven environment that mirrors their own, facing real-life threats. In other words: hyper-realistic simulation. </span>\r\n<span style=\"color: rgb(97, 97, 97); \">Just as you would never send a pilot to combat before simulating emergency scenarios and potential combat situations, we should not send our cyber defenders to the field before enabling them to experience potential attacks and practice response within a simulated environment.</span>\r\n<span style=\"color: rgb(97, 97, 97); \">A flight simulator replicates the actual combat zone, from realistic weather conditions and aircraft instruments to enemy aircraft attacks. This realism maximizes the impact of the training session. Similarly, the way to maximize the effectiveness of security training is by providing a virtual replica of your actual “warzone”, resulting in a true-to-life experience. Security teams should use the actual security tools they use at work, and should experience their familiar network setup and traffic. 
Threats should be simulated accurately, including advanced, evolving threats, targeted malware and ransomware.<br /></span>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/education-training.png","alias":"cyber-security-training-and-simulation"},"876":{"id":876,"title":"Object Storage","description":"Object storage (also known as object-based storage) is a computer data storage architecture that manages data as objects, as opposed to other storage architectures like file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks. Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier. Object storage can be implemented at multiple levels, including the device level (object-storage device), the system level, and the interface level. In each case, object storage seeks to enable capabilities not addressed by other storage architectures, like interfaces that can be directly programmable by the application, a namespace that can span multiple instances of physical hardware, and data-management functions like data replication and data distribution at object-level granularity.\r\nObject storage systems allow retention of massive amounts of unstructured data. Object storage is used for purposes such as storing photos on Facebook, songs on Spotify, or files in online collaboration services, such as Dropbox.\r\nObject storage is a method of data storage that emerged in the mid-1990s as researchers foresaw that existing storage methods would eventually start to show their limitations in certain scenarios. True to its name, object storage treats data as discrete units, or objects, that are accompanied by metadata and a universally unique identifier (UUID). This unstructured data resides in a flat (as opposed to tiered) address space called a storage pool. 
Object storage is also known for its compatibility with cloud computing, due to its unlimited scalability and faster data retrieval.\r\nToday, as data comes to underpin everything we do, the adoption of object storage systems has increased. It’s common in data centers and popular cloud-based platforms, such as Google cloud storage or Amazon cloud storage, and has become the de facto standard in several enterprise use cases.<br /><br />","materialsDescription":"<span style=\"font-weight: bold;\">What is Object Storage?</span>\r\nIn the modern world of cloud computing, object storage is the storage and retrieval of unstructured blobs of data and metadata using an HTTP API. Instead of breaking files down into blocks to store it on disk using a file system, we deal with whole objects stored over the network. These objects could be an image file, logs, HTML files, or any self-contained blob of bytes. They are unstructured because there is no specific schema or format they need to follow.<br />Object storage took off because it greatly simplified the developer experience. Because the API consists of standard HTTP requests, libraries were quickly developed for most programming languages. Saving a blob of data became as easy as an HTTP PUT request to the object store. Retrieving the file and metadata is a normal GET request. Further, most object storage services can also serve the files publicly to your users, removing the need to maintain a web server to host static assets.\r\nOn top of that, object storage services charge only for the storage space you use (some also charge per HTTP request, and for transfer bandwidth). 
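The whole-object PUT/GET semantics described above can be modeled with a small in-memory sketch. The class and method names below are our own, not any vendor's API; a real service exposes the same operations as HTTP PUT and GET requests against a bucket.

```python
import uuid

class ObjectStore:
    # In-memory stand-in for an object store: whole objects keyed by
    # name, each carrying a blob, metadata, and a unique identifier.
    def __init__(self):
        self._objects = {}

    def put(self, key, blob, metadata=None):
        # Whole-object write: there is no partial update, so callers
        # must rewrite the entire blob to change any part of it.
        obj_id = str(uuid.uuid4())
        self._objects[key] = {
            'id': obj_id,
            'blob': bytes(blob),
            'metadata': dict(metadata or {}),
        }
        return obj_id

    def get(self, key):
        # Retrieve the blob together with its metadata.
        obj = self._objects[key]
        return obj['blob'], obj['metadata']

store = ObjectStore()
store.put('logs/app.txt', b'hello', metadata={'content-type': 'text/plain'})
blob, meta = store.get('logs/app.txt')
print(blob, meta)
```

Note how appending one line to the stored blob would require a get, modify, and full put, which is exactly the whole-object limitation discussed below.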
This is a boon for small developers, who can get world-class storage and hosting of assets at costs that scale with use.\r\n<span style=\"font-weight: bold;\">What are the advantages of object storage?</span>\r\n<ul><li>A simple HTTP API, with clients available for all major operating systems and programming languages</li><li>A cost structure that means you only pay for what you use</li><li>Built-in public serving of static assets means one less server for you to run yourself</li><li>Some object stores offer built-in CDN integration, which caches your assets around the globe to make downloads and page loads faster for your users</li><li>Optional versioning means you can retrieve old versions of objects to recover from accidental overwrites of data</li><li>Object storage services can easily scale from modest needs to really intense use-cases without the developer having to launch more resources or rearchitect to handle the load</li><li>Using an object storage service means you don’t have to maintain hard drives and RAID arrays, as that’s handled by the service provider</li><li>Being able to store chunks of metadata alongside your data blob can further simplify your application architecture</li></ul>\r\n<span style=\"font-weight: bold;\">What are the disadvantages of object storage?</span>\r\n<ul><li>You can’t use object storage services to back a traditional database, due to the high latency of such services</li><li>Object storage doesn’t allow you to alter just a piece of a data blob, you must read and write an entire object at once. This has some performance implications. For instance, on a file system, you can easily append a single line to the end of a log file. On an object storage system, you’d need to retrieve the object, add the new line, and write the entire object back. This makes object storage less ideal for data that changes very frequently</li><li>Operating systems can’t easily mount an object store like a normal disk. 
There are some clients and adapters to help with this, but in general, using and browsing an object store is not as simple as flipping through directories in a file browser</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/jhghj.png","alias":"object-storage"},"899":{"id":899,"title":"IT System Security Services","description":"Today’s threat landscape is dynamic. The proliferation of disruptive technologies like mobile, social, cloud and big data has been increasingly impacting protection strategies. These technologies will continue to add to the complexity and drive the security needs of the IT infrastructure and information assets. They will also challenge integrity of current security controls and will risk enterprise data and intellectual property. Thus, it’s important that businesses have a strategy to deliver effective enterprise security risk management and situational awareness using defense-in-depth strategies, monitoring, analysis and reporting.\r\n<span style=\"font-weight: bold; \">IT System Security Services</span> ensures complete protection of your applications, products, and infrastructure against cyber threats, possible data leaks, thefts, or disasters. By reducing possible damages and providing full control over privacy and compliance, all your shared data, business intelligence, and other assets can be managed securely without risks. \r\n<span style=\"font-weight: bold; \">SecOps (Security + Operations)</span> is a movement created to facilitate collaboration between IT security and operations teams and integrate the technology and processes they use to keep systems and data secure — all in an effort to reduce risk and improve business agility. 
\r\nSecOps, formed from a combination of security and IT operations staff, is a highly skilled team focused on monitoring and assessing risk and protecting corporate assets, often operating from a security operations center, or SOC.\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">SecOps has the following business benefits and goals:</span></p>\r\n<ul><li>continuous protection;</li><li>a quick and effective response;</li><li>decreased costs of breaches and operations;</li><li>threat prevention;</li><li>security expertise;</li><li>compliance;</li><li>communication and collaboration; and</li><li>an improved business reputation.</li></ul>\r\n SecOps combines operations and security teams into one organization. Security is “shifting left”—instead of coming in at the end of the process, it is present at the beginning, when requirements are stated and systems are designed. Instead of having ops set up a system, then having security come in to secure it, systems are built from the get go with security in mind.\r\nSecOps has additional implications in organizations which practice DevOps—joining development and operations teams into one group with shared responsibility for IT systems. In this environment, SecOps involves even broader cooperation—between security, ops and software development teams. This is known as DevSecOps. It shifts security even further left—baking security into systems from the first iteration of development.","materialsDescription":"<h3 class=\"align-center\">What are the types of IT security? </h3>\r\n<ul><li><span style=\"font-weight: bold;\">Network security</span></li></ul>\r\nNetwork security is used to prevent unauthorized or malicious users from getting inside your network. This ensures that usability, reliability, and integrity are uncompromised. This type of security is necessary to prevent a hacker from accessing data inside the network. 
It also prevents them from negatively affecting your users’ ability to access or use the network.<br />Network security has become increasingly challenging as businesses increase the number of endpoints and migrate services to the public cloud.\r\n<ul><li><span style=\"font-weight: bold;\">Internet security</span></li></ul>\r\nInternet security involves the protection of information that is sent and received in browsers, as well as network security involving web-based applications. These protections are designed to monitor incoming internet traffic for malware as well as unwanted traffic. This protection may come in the form of firewalls, antimalware, and antispyware.\r\n<ul><li><span style=\"font-weight: bold;\">Endpoint security</span></li></ul>\r\nEndpoint security provides protection at the device level. Devices that may be secured by endpoint security include cell phones, tablets, laptops, and desktop computers. Endpoint security will prevent your devices from accessing malicious networks that may be a threat to your organization. Advanced malware protection and device management software are examples of endpoint security.\r\n<ul><li><span style=\"font-weight: bold;\">Cloud security</span></li></ul>\r\nApplications, data, and identities are moving to the cloud, meaning users are connecting directly to the Internet and are not protected by the traditional security stack. Cloud security can help secure the usage of software-as-a-service (SaaS) applications and the public cloud. A cloud-access security broker (CASB), secure Internet gateway (SIG), and cloud-based unified threat management (UTM) can be used for cloud security.\r\n<ul><li><span style=\"font-weight: bold;\">Application security</span></li></ul>\r\nWith application security, applications are specifically coded at the time of their creation to be as secure as possible, to help ensure they are not vulnerable to attacks.
This added layer of security involves evaluating the code of an app and identifying the vulnerabilities that may exist within the software.\r\n<h3 class=\"align-center\"> SecOps vs SOC: What’s The Difference? </h3>\r\nSecurity operations can look vastly different from company to company, greatly varying in size and maturity. Whether a company’s security function is a simple incident and event management capability or a full-fledged mission control center with the highest levels of protection, the goal is the same: to prevent, identify, and mitigate threats to the organization.\r\nSecurity Operations (SecOps) is the seamless collaboration between IT Security and IT Operations to effectively mitigate risk. SecOps team members assume joint responsibility and ownership for any security concerns, ensuring that security is infused into the entire operations cycle.<br />Historically, security and operations teams often had different and conflicting business goals. Operations teams were focused on setting up systems in a way that would meet performance and uptime goals. Security teams were focused on complying with regulatory requirements, putting defenses in place, and responding to security concerns.\r\nSecOps itself is a set of SOC processes, tools, and practices that helps enterprises meet their security goals more successfully and efficiently. However, the classic SOC is not compatible with the SecOps culture. In the past, the SOC would be completely isolated from the rest of the organization, performing its specific duties without much interaction with other parts of the business.<br />In today’s culture, many decision makers understand that this is no longer beneficial. Today, security must be a joint effort.
It is crucial for organizations to embrace the idea of the modern SOC: one that promotes collaboration and communication between the operations and the security teams.\r\n<h3 class=\"align-center\"> What is the difference between IT security and information security (InfoSec)? </h3>\r\nAlthough IT security and information security sound similar, they do refer to different types of security. Information security refers to the processes and tools designed to protect sensitive business information from intrusion, whereas IT security refers to securing digital data through computer network security.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/IT_security_system.png","alias":"it-system-security-services"}},"companyUrl":"https://www.elcoregroup.com/","countryCodes":["CHE","UKR"],"certifications":[],"isSeller":true,"isSupplier":true,"isVendor":false,"presenterCodeLng":"","seo":{"title":"ELCORE Group","keywords":"ELCORE, holding, warranty, effective, products, combination, customers, unique","description":"<p>The international holding ELCORE GROUP was established in 2006. The holding company openly and successfully operates offices in Moldova, Georgia, Uzbekistan, Tajikistan, Armenia, Ukraine, Kazakhstan, Azerbaijan, Türkiye, Mongolia. The distributor offers","og:title":"ELCORE Group","og:description":"<p>The international holding ELCORE GROUP was established in 2006. The holding company openly and successfully operates offices in Moldova, Georgia, Uzbekistan, Tajikistan, Armenia, Ukraine, Kazakhstan, Azerbaijan, Türkiye, Mongolia.
The distributor offers","og:image":"https://old.roi4cio.com/uploads/roi/company/Elcore_simple_logo_1.png"},"eventUrl":"","vendorPartners":[{"vendor":"Oracle","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"DELL","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"Cisco","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"Hewlett Packard Enterprise","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"IBM","partnershipLevel":"Distributor","countries":"Republic of Belarus","partnersType":""},{"vendor":"Citrix","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"HP Inc","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"Lenovo","partnershipLevel":"Distributor","countries":"Republic of Belarus, Georgia","partnersType":""},{"vendor":"Dell EMC","partnershipLevel":"Distributor","countries":"","partnersType":""},{"vendor":"Avaya","partnershipLevel":"Distributor","countries":"Republic of Belarus, Georgia","partnersType":""},{"vendor":"Polycom","partnershipLevel":"Distributor","countries":"Republic of Belarus, Georgia","partnersType":""}],"supplierPartners":[],"vendoredProducts":[],"suppliedProducts":[{"id":4925,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/HP_Integrity_Superdome.png","logo":true,"scheme":false,"title":"HP Integrity Superdome","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":451,"alias":"hp-integrity-superdome","companyTitle":"Hewlett Packard Enterprise","companyTypes":["supplier","vendor"],"companyId":172,"companyAlias":"hewlett-packard-enterprise","description":"HP Superdome is the family of enterprise-class high-performance servers manufactured with both the PA-8900 processors (HP 9000 family) and Intel Itanium 2 processors (HP Integrity family). Superdome is represented by three models with 16, 32 and 64 processor sockets. 
Within the family, it is possible to upgrade from the smaller models to the larger ones, which reduces initial costs, protects the investment and allows a phased increase in system performance. Superdome is a universal hierarchical crossbar architecture specifically designed to work with various types of processors. The main components of the architecture are cells, the crossbar backplane, and I/O subsystems.<br />\r\nThe cell board is the main building block of the Superdome system. It is a symmetric multiprocessor (SMP) containing 4 processor sockets and up to 64 GB of main memory. A cell can optionally be connected to its own I/O subsystem, an I/O chassis with 12 PCI-X slots. Each cell can work in different configurations, i.e., be connected to other cells or form an independent server. In one system, cell boards with PA-RISC processors can be combined with cell boards with Itanium processors.\r\nThe backplane provides a non-blocking connection between cells, their associated memory, and I/O modules. The main principle underlying Superdome is balanced system performance at all levels of the hierarchy, which avoids additional delays when the processors of one cell access RAM located on other cells. This architecture allows the system to deliver record performance on various types of workloads, such as online transaction processing, technical computing, Internet transaction processing, analysis of large volumes of data, etc.<br />\r\nA single Superdome system can be logically divided into many hardware-independent or software-independent partitions, virtual machines, or resource partitions within a single server. Each hardware or software partition or virtual machine runs its own independent operating system.
For cells with PA-RISC processors, the operating system is HP-UX 11i, and for cells with Itanium processors, HP-UX, Linux, Microsoft Windows 2003, and OpenVMS.\r\nTo implement effective system management and technical support, the Superdome server family includes:\r\n<ul><li>Event Monitoring System (EMS), an alert service that monitors the status of server hardware, including processors, memory, FC components, system buses, cache, system temperature, battery status, fans, power supplies.</li></ul>\r\n<ul><li>A hardware inventory service in Support Tools Manager (STM) that provides system inventory information, including serial numbers, part numbers, version levels, and so on.</li></ul>\r\n<ul><li>Support Management Station (SMS), which is used to start the process of scanning, diagnostics and testing the platform throughout the life cycle, including upgrades.</li></ul>\r\nThe Superdome family provides customers with investment protection and uptime thanks to a system infrastructure designed to upgrade to next-generation processors.","shortDescription":"HP Superdome is a premium server designed and manufactured by Hewlett Packard Enterprise.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":19,"sellingCount":13,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"HP Integrity Superdome","keywords":"","description":"HP Superdome is the family of enterprise-class high-performance servers manufactured with both the PA-8900 processors (HP 9000 family) and Intel Itanium 2 processors (HP Integrity family). Superdome is represented by three models with 16, 32 and 64 processor s","og:title":"HP Integrity Superdome","og:description":"HP Superdome is the family of enterprise-class high-performance servers manufactured with both the PA-8900 processors (HP 9000 family) and Intel Itanium 2 processors (HP Integrity family). 
Superdome is represented by three models with 16, 32 and 64 processor s","og:image":"https://old.roi4cio.com/fileadmin/user_upload/HP_Integrity_Superdome.png"},"eventUrl":"","translationId":4926,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":35,"title":"Server","alias":"server","description":"In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.\r\nClient–server systems are today most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.\r\nStrictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. 
In addition to server, the words serve and service (as noun and as verb) are frequently used, though servicer and servant are not. The word service (noun) may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests".\r\nThe server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with the peer-to-peer model, in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes are clients. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.\r\nWhile request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; this initial registration may be done by request–response.
Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.","materialsDescription":" <span style=\"font-weight: bold;\">What is a server?</span>\r\nA server is a software or hardware device that accepts and responds to requests made over a network. The device that makes the request, and receives a response from the server, is called a client. On the Internet, the term "server" commonly refers to the computer system which receives a request for a web document and sends the requested information to the client.\r\n<span style=\"font-weight: bold;\">What are they used for?</span>\r\nServers are used to manage network resources. For example, a user may set up a server to control access to a network, send/receive an e-mail, manage print jobs, or host a website. They are also proficient at performing intense calculations. Some servers are committed to a specific task, often referred to as dedicated. However, many servers today are shared servers which can take on the responsibility of e-mail, DNS, FTP, and even multiple websites in the case of a web server.\r\n<span style=\"font-weight: bold;\">Why are servers always on?</span>\r\nBecause they are commonly used to deliver services that are constantly required, most servers are never turned off. Consequently, when servers fail, they can cause the network users and company many problems. 
To alleviate these issues, servers are commonly set up to be fault-tolerant.\r\n<span style=\"font-weight: bold;\">What are the examples of servers?</span>\r\nThe following list contains links to various server types:\r\n<ul><li>Application server;</li><li>Blade server;</li><li>Cloud server;</li><li>Database server;</li><li>Dedicated server;</li><li>Domain name service;</li><li>File server;</li><li>Mail server;</li><li>Print server;</li><li>Proxy server;</li><li>Standalone server;</li><li>Web server.</li></ul>\r\n<span style=\"font-weight: bold;\">How do other computers connect to a server?</span>\r\nWith a local network, the server connects to a router or switch that all other computers on the network use. Once connected to the network, other computers can access that server and its features. For example, with a web server, a user could connect to the server to view a website, search, and communicate with other users on the network.\r\nAn Internet server works the same way as a local network server, but on a much larger scale. The server is assigned an IP address by InterNIC, or by a web host.\r\nUsually, users connect to a server using its domain name, which is registered with a domain name registrar. When users connect to the domain name (such as "computerhope.com"), the name is automatically translated to the server's IP address by a DNS resolver.\r\nThe domain name makes it easier for users to connect to the server because the name is easier to remember than an IP address. Also, domain names enable the server operator to change the IP address of the server without disrupting the way that users access the server. The domain name can always remain the same, even if the IP address changes.\r\n<span style=\"font-weight: bold;\">Where are servers stored?</span>\r\nIn a business or corporate environment, a server and other network equipment are often stored in a closet or glasshouse. 
These areas help isolate sensitive computers and equipment from people who should not have access to them.\r\nServers that are remote or not hosted on-site are located in a data center. With these types of servers, the hardware is managed by another company and configured remotely by you or your company.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Server.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":3396,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/dell_vxrail.jpg","logo":true,"scheme":false,"title":"Dell EMC VxRail","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":59,"alias":"dell-emc-vxrail","companyTitle":"Dell EMC","companyTypes":["vendor"],"companyId":955,"companyAlias":"dell-emc","description":"Whether you are accelerating data center modernization or deploying a hybrid cloud, VxRail delivers a turnkey experience that enables our customers to continuously innovate. 
The only fully integrated, pre-configured, and pre-tested VMware hyperconverged system on the market, VxRail transforms HCI networking and simplifies VMware cloud adoption, while meeting any HCI use case, including support for many of the most demanding workloads and applications.\r\nVxRail, powered by Dell EMC PowerEdge server platforms, features next-generation technology that provides future proofing for your infrastructure, including NVMe cache drives, SmartFabric Services supported by the Dell EMC PowerSwitch family, deep integration across the VMware ecosystem, advanced VMware hybrid cloud integration, and automated tools and guides to simplify deployment of a secure VxRail infrastructure.\r\n<ul><li>Consolidates compute, storage, and virtualization with end-to-end automated lifecycle management</li><li>Automates network setup and lifecycle management with SmartFabric Services, greatly accelerating deployment and simplifying operations</li><li>Delivers enterprise edge solutions with support for 2-node clusters</li><li>Provides a single point of support for all software and hardware</li><li>Offers smarter operations and infrastructure machine learning as part of the VxRail HCI System Software</li></ul>\r\n\r\n<span style=\"font-weight: bold;\">Benefits:</span>\r\n<span style=\"font-weight: bold; \">Dell Technologies Cloud Platform:</span> VMware Cloud Foundation on VxRail delivers full stack integration and simplified path to hybrid cloud that is future-proof for next generation VMware Cloud technologies.\r\n<span style=\"font-weight: bold; \">Jointly engineered:</span> Enables 2.5x faster time to value with synchronous availability of VMware core HCI and full stack HCI software with unique integration enabled by VxRail HCI System Software.\r\n<span style=\"font-weight: bold; \">Operational transparency:</span> 100% of VxRail value-added software capabilities and management available through VMware vCenter.\r\n<span style=\"font-weight: bold;\">Automated 
connectivity:</span> The first and only HCI appliance with network configuration automation reduces deployment and administration by 98%.","shortDescription":"Whether you are accelerating data center modernization or deploying a hybrid cloud, VxRail delivers a turnkey experience that enables our customers to continuously innovate.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":9,"sellingCount":20,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Dell EMC VxRail","keywords":"","description":"Whether you are accelerating data center modernization or deploying a hybrid cloud, VxRail delivers a turnkey experience that enables our customers to continuously innovate. The only fully integrated, pre-configured, and pre-tested VMware hyperconverged system","og:title":"Dell EMC VxRail","og:description":"Whether you are accelerating data center modernization or deploying a hybrid cloud, VxRail delivers a turnkey experience that enables our customers to continuously innovate. The only fully integrated, pre-configured, and pre-tested VMware hyperconverged system","og:image":"https://old.roi4cio.com/fileadmin/user_upload/dell_vxrail.jpg"},"eventUrl":"","translationId":3397,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":697,"title":"Backup Administration","alias":"backup-administration","description":" Nowadays, information, along with human capital, is the most valuable asset of every enterprise. The backup system administration is an integral part of data and IT system security structure. It is the backup process quality and method that determine whether in the case of a system failure or data loss it will be possible to maintain functionality and continuity of the enterprise’s operations. 
This is why careful creation of backup copies is so important.\r\nCreating backup copies can be burdensome, expensive and time-consuming when you do it all by yourself. On the other hand, automating the process introduces a range of improvements, saves time and eliminates the risk of data loss. The copies are created automatically and are protected against interference by third parties. The network administrator can manage the backup system remotely, monitor the validity of created copies and retrieve lost information.","materialsDescription":" <span style=\"font-weight: bold;\">The need for backup: when will a backup scheme help out?</span>\r\n<span style=\"font-weight: bold;\">Data corruption</span>\r\nThe need to create a backup is most obvious when your data may be damaged - through physical destruction or theft of the storage medium, a virus attack, accidental and/or illegal changes, etc.\r\nA working backup plan will allow you to restore your data after any failure or accident without cost and complexity.\r\n<span style=\"font-weight: bold;\">Copying information, creating mirrors</span>\r\nA less obvious use of a backup scheme is to automatically create copies of data not for storage, but for use: cloning and mirroring databases, web sites, work projects, etc.\r\nThe backup scheme does not dictate what to copy, where, or why - you can use backup as a cloning tool.\r\n<span style=\"font-weight: bold;\">Test, training and debugging projects</span>\r\nA special case of data cloning is creating a copy of working information in order to debug, improve or study the system that processes it.
You can create a copy of your website or database using the backup instructions to make and debug any changes.\r\nThe need to back up training and debugging versions of information is all the greater because the changes you make often lead to data loss.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Backup_Administration.png"},{"id":46,"title":"Data Protection and Recovery Software","alias":"data-protection-and-recovery-software","description":"Data protection and recovery software provides backup, integrity and security for data and enables timely, reliable and secure backup of data from a host device to a destination device. Recently, the data protection and recovery software market has been disrupted by innovative technologies such as server virtualization, disk-based backup, and cloud services, where emerging players are playing an important role. Tier one players such as IBM, Hewlett Packard Enterprise, EMC Corporation, Symantec Corporation and Microsoft Corporation are also moving towards these technologies through partnerships and acquisitions.\r\nThe major factor driving the data protection and recovery software market is the high adoption of cloud-based services and technologies. Many organizations are moving towards the cloud to reduce their operational expenses and to provide real-time access to their employees. However, increased usage of the cloud has increased the risk of data loss, data theft and unauthorized access to confidential information, which increases the demand for data protection and recovery solution suites.","materialsDescription":" \r\n<span style=\"font-weight: bold; \">What is Data recovery?</span>\r\nData recovery is a process of salvaging (retrieving) inaccessible, lost, corrupted, damaged or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a normal way.
The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).\r\nThe most common data recovery scenario involves an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be easily accomplished using a Live CD or DVD by booting directly from a ROM instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.\r\nAnother scenario involves a drive-level failure, such as a compromised file system or drive partition, or a hard disk drive failure. In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions involve repairing the logical file system, partition table or master boot record, or updating the firmware or drive recovery techniques ranging from software-based recovery of corrupted data, hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), to hardware replacement on a physically damaged drive which allows for extraction of data to a new drive. 
If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.\r\nIn a third scenario, files have been accidentally "deleted" from a storage medium by the users. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and the space the deleted data occupies is made available for later overwriting. Deleted files are not discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in a number of disconnected fragments, and may be recoverable if not overwritten by other data files.\r\nThe term "data recovery" is also used in the context of forensic applications or espionage, where data which have been encrypted or hidden, rather than damaged, are recovered. Sometimes data on a computer is encrypted or hidden, for reasons such as a virus attack, and can only be recovered by computer forensics experts.\r\n<span style=\"font-weight: bold;\">What is a backup?</span>\r\nA backup, or data backup, or the process of backing up, refers to the copying into an archive file of computer data that is already in secondary storage—so that it may be used to restore the original after a data loss event. The verb form is "back up" (a phrasal verb), whereas the noun and adjective form is "backup".\r\nBackups have two distinct purposes. The primary purpose is to recover data after its loss, be it by data deletion or corruption. Data loss can be a common experience of computer users; a 2008 survey found that 66% of respondents had lost files on their home PC.
The secondary purpose of backups is to recover data from an earlier time, according to a user-defined data retention policy, typically configured within a backup application for how long copies of data are required. Though backups represent a simple form of disaster recovery and should be part of any disaster recovery plan, backups by themselves should not be considered a complete disaster recovery plan. One reason for this is that not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server by simply restoring data from a backup.\r\nSince a backup system contains at least one copy of all data considered worth saving, the data storage requirements can be significant. Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository model may be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are useful for making backups. There are also many different ways in which these devices can be arranged to provide geographic redundancy, data security, and portability.\r\nBefore data are sent to their storage locations, they are selected, extracted, and manipulated. Many different techniques have been developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources as well as compression, encryption, and de-duplication, among others. Every backup scheme should include dry runs that validate the reliability of the data being backed up. 
It is important to recognize the limitations and human factors involved in any backup scheme.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Data_Protection_and_Recovery_Software__1_.png"},{"id":509,"title":"Converged and Hyper Converged System","alias":"converged-and-hyper-converged-system","description":" Converged and hyper-converged infrastructures simplify support for virtual desktop infrastructure and desktop virtualization, as they are designed to be easy to install and perform complex tasks.\r\nConverged infrastructure combines the four main components of a data center in one package: computing devices, storage devices, network devices, and server virtualization tools. Hyper-converged infrastructure allows for tighter integration of a larger number of components using software tools.\r\nIn both converged and hyper-converged infrastructure, all elements are compatible with each other. Thanks to this, you can purchase all the storage and network devices your company needs at once, and these are of great importance in a virtual desktop infrastructure. This simplifies the process of deploying such an infrastructure, something many companies that need to virtualize their desktop systems have been waiting for and will welcome.\r\nDespite their value and innovation, several questions remain about these technologies' intended use and differences. Let's try to figure out what functionality converged and hyper-converged infrastructures offer and how they differ.","materialsDescription":" <span style=\"font-weight: bold;\">What is converged infrastructure?</span>\r\nConverged infrastructure combines computing devices, storage, network devices and server virtualization tools in one chassis so that they can be managed from one place. 
Management capabilities may include the management of virtual desktop infrastructure, depending on the selected configuration and manufacturer.\r\nThe hardware included in the bundled converged infrastructure is pre-configured to support specific targets: virtual desktop infrastructures, databases, special applications, and so on. In practice, however, you do not have much freedom to change the selected configuration.\r\nRegardless of the method chosen for extending the virtual desktop infrastructure environment, you should understand that subsequent vertical scaling will be costly and time-consuming. Adding individual components becomes complex and deprives you of many of the benefits of a converged infrastructure. Adding workstations and expanding storage capacity in a corporate infrastructure can be just as expensive, which suggests the need for proper planning for any virtual desktop infrastructure deployment.\r\nOn the other hand, the components of a converged infrastructure can also operate on their own for a long time. For example, a server from such an infrastructure works well even without the rest of the infrastructure components.\r\n<span style=\"font-weight: bold;\">What is a hyper-converged infrastructure?</span>\r\nHyper-converged infrastructure was built on the basis of converged infrastructure and the concept of a software-defined data center. It combines all the components of the usual data center in one system. All four key components of the converged infrastructure are in place, but sometimes it also includes additional components, such as backup software, snapshot capabilities, data deduplication functionality, intermediate compression, wide area network (WAN) optimization, and much more. Converged infrastructure relies primarily on hardware, while a software-defined data center adapts to almost any hardware. Hyper-converged infrastructure combines these two capabilities.\r\nHyper-converged infrastructure is supported by one supplier. 
It can be managed as a single system with a single set of tools. To expand the infrastructure, you just need to install blocks of the necessary devices and resources (for example, storage) into the main system block, and this can be done literally on the fly.\r\nSince hyper-converged infrastructure is software-defined (that is, the operation of the infrastructure is logically separated from the physical equipment), the components are more tightly integrated than in a conventional converged infrastructure, and the components themselves must be nearby to work correctly. This makes it possible to use a hyper-converged infrastructure to support even more workloads than a conventional converged infrastructure, because its behavior can be defined and adjusted in software. In addition, you can make it work with specialized applications and workloads, which pre-configured converged infrastructures do not allow.\r\nHyper-converged infrastructure is especially valuable for virtual desktop infrastructure because it allows you to scale up quickly without additional costs. With the classic virtual desktop infrastructure, things are often completely different: companies need to buy more resources before scaling, or wait for virtual desktops to exhaust the allocated storage and network resources and only then add new infrastructure.\r\nBoth scenarios require significant time and money. But with hyper-converged infrastructure, if you need to expand storage, you can simply install the required devices in the existing stack. Scaling can be done quickly, in the time required to deliver the equipment. 
In this case, you do not have to go through the full procedure of re-evaluating and reconfiguring the corporate infrastructure.\r\nIn addition, when moving from physical PCs to virtual workstations, you will need devices to perform all the computational tasks that laptops and PCs typically perform. Hyper-converged infrastructure will greatly help with this, as it often comes bundled with a large amount of flash memory, which has a positive effect on the performance of virtual desktops. This increases the speed of I/O operations, smooths work under high loads, and allows you to run virus scanning and other types of monitoring in the background (without distracting users).\r\nThe flexibility of hyper-converged infrastructure makes it more scalable and cost-effective than converged infrastructure, since computing and storage devices can be added as needed. The cost of the initial investment in both infrastructures is high, but in the long term the investment should pay off.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Converged_and_Hyper_Converged_System.png"},{"id":5,"title":"Security Software","alias":"security-software","description":" Computer security software or cybersecurity software is any computer program designed to enhance information security. Security software is a broad term that encompasses a suite of different types of software that deliver data, computer, and network security in various forms. \r\nSecurity software can protect a computer from viruses, malware, unauthorized users and other security exploits originating from the Internet. 
Different types of security software include anti-virus software, firewall software, network security software, Internet security software, malware/spamware removal and protection software, cryptographic software, and more.\r\nIn end-user computing environments, anti-spam and anti-virus security software is the most common type of software used, whereas enterprise users add a firewall and intrusion detection system on top of it. \r\nSecurity software may be focused on preventing attacks from reaching their target, on limiting the damage attacks can cause if they reach their target, and on tracking the damage that has been caused so that it can be repaired. As the nature of malicious code evolves, security software also evolves.<span style=\"font-weight: bold; \"></span>\r\n<span style=\"font-weight: bold; \">Firewall. </span>Firewall security software prevents unauthorized users from accessing a computer or network without restricting those who are authorized. Firewalls can be implemented with hardware or software. Some computer operating systems include software firewalls in the operating system itself. For example, Microsoft Windows has a built-in firewall. Routers and servers can include firewalls. There are also dedicated hardware firewalls that have no function other than protecting a network from unauthorized access.\r\n<span style=\"font-weight: bold; \">Antivirus.</span> Antivirus solutions work to prevent malicious code from attacking a computer by recognizing the attack before it begins. But they are also designed to stop an attack in progress that could not be prevented, and to repair damage done by the attack once it abates. Antivirus software is useful because it addresses security issues in cases where attacks have made it past a firewall. 
New computer viruses appear daily, so antivirus and security software must be continuously updated to remain effective.\r\n<span style=\"font-weight: bold; \">Antispyware.</span> While antivirus software is designed to prevent malicious software from attacking, the goal of antispyware software is to prevent unauthorized software from stealing information that is on a computer or being processed through the computer. Since spyware does not need to attempt to damage data files or the operating system, it does not trigger antivirus software into action. However, antispyware software can recognize the particular actions spyware is taking by monitoring the communications between a computer and external message recipients. When communications occur that the user has not authorized, antispyware can notify the user and block further communications.\r\n<span style=\"font-weight: bold; \">Home Computers.</span> Home computers and some small businesses usually implement security software at the desktop level - meaning on the PC itself. This category of computer security and protection, sometimes referred to as end-point security, remains resident, or continuously operating, on the desktop. Because the software is running, it uses system resources, and can slow the computer's performance. However, because it operates in real time, it can react rapidly to attacks and seek to shut them down when they occur.\r\n<span style=\"font-weight: bold; \">Network Security.</span> When several computers are all on the same network, it's more cost-effective to implement security at the network level. Antivirus software can be installed on a server and then loaded automatically to each desktop. However, firewalls are usually installed on a server or purchased as an independent device that is inserted into the network where the Internet connection comes in. 
All of the computers inside the network communicate unimpeded, but any data going in or out of the network over the Internet is filtered through the firewall.<br /><br /><br />","materialsDescription":"<h1 class=\"align-center\"> <span style=\"font-weight: normal; \">What is IT security software?</span></h1>\r\nIT security software provides protection to businesses’ computers and networks. It serves as a defense against unauthorized access and intrusion into such a system. It comes in various types, with many businesses and individuals already using some of them in one form or another.\r\nWith the emergence of more advanced technology, cybercriminals have also found more ways to get into the systems of many organizations. Since more and more businesses now rely on software products for their crucial operations, the importance of security software assurance must be taken seriously – now more than ever. Having reliable protection such as security software is crucial to safeguard your computing environments and data. \r\n<p class=\"align-left\">It is not just the government or big corporations that become victims of cyber threats. In fact, small and medium-sized businesses have increasingly become targets of cybercrime over the past years. </p>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal; \">What are the features of IT security software?</span></h1>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Automatic updates. </span>This ensures you don’t miss any updates and your system runs the most up-to-date version, ready to respond to constantly emerging new cyber threats.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Real-time scanning.</span> Dynamic scanning features make it easier to detect and intercept malicious entities promptly. 
Without this feature, you’ll risk not being able to prevent damage to your system before it happens.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Auto-clean.</span> A feature that removes viruses automatically upon detection, without the user having to delete them manually from the quarantine zone. Unless you want the option to review the malware, there is no reason to keep the malicious software on your computer, which makes this feature essential.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Multiple app protection.</span> This feature ensures all your apps and services are protected, whether in email, instant messengers, or internet browsers, among others.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Application level security.</span> This enables you to control access to the application on a per-user-role or per-user basis to guarantee only the right individuals can enter the appropriate applications.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Role-based menu.</span> This displays menu options to different users according to their roles, for easier assignment of access and control.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Row-level (multi-tenant) security.</span> This gives you control over data access at the row level for a single application. 
This means you can allow multiple users to access the same application while controlling the data they are authorized to view.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Single sign-on.</span> A session or user authentication process that allows users to access multiple related applications, as long as they are authorized in a single session, by logging in with their name and password in a single place.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">User privilege parameters.</span> These are customizable feature and security settings per individual user or role, accessible in their profile throughout every application.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold; \">Application activity auditing.</span> Vital for IT departments to quickly see when users logged in and out and which applications they accessed. Developers can log end-user activity through these sign-on/sign-off events.</li></ul>\r\n<p class=\"align-left\"><br /><br /><br /><br /></p>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Security_Software.png"},{"id":35,"title":"Server","alias":"server","description":"In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. 
Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.\r\nClient–server systems are today most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.\r\nStrictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as noun and as verb) are frequently used, though servicer and servant are not. The word service (noun) may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests".\r\nThe server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with the peer-to-peer model, in which the relationship is on-demand reciprocation. 
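The request–response model described above can be sketched with a minimal TCP exchange on the loopback interface. This is a hypothetical illustration, not any particular server's protocol: the serve_once helper and the "ACK" reply format are invented for the example.

```python
import socket
import threading

def serve_once(sock):
    """Accept one client, read its request, and send a response back."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024)          # the client's request
        conn.sendall(b"ACK: " + request)   # the server's response

# Server side: listen on an ephemeral localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: send a request and wait for the response.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

server.close()
print(reply)
```

The roles are defined by the communication pattern, not the machines: the same process could act as a client of one service while serving requests from another.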
In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.\r\nWhile request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; this initial registration may be done by request–response. Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.","materialsDescription":" <span style=\"font-weight: bold;\">What is a server?</span>\r\nA server is a software or hardware device that accepts and responds to requests made over a network. The device that makes the request, and receives a response from the server, is called a client. On the Internet, the term "server" commonly refers to the computer system which receives a request for a web document and sends the requested information to the client.\r\n<span style=\"font-weight: bold;\">What are they used for?</span>\r\nServers are used to manage network resources. For example, a user may set up a server to control access to a network, send/receive an e-mail, manage print jobs, or host a website. They are also proficient at performing intense calculations. Some servers are committed to a specific task, often referred to as dedicated. 
However, many servers today are shared servers which can take on the responsibility of e-mail, DNS, FTP, and even multiple websites in the case of a web server.\r\n<span style=\"font-weight: bold;\">Why are servers always on?</span>\r\nBecause they are commonly used to deliver services that are constantly required, most servers are never turned off. Consequently, when servers fail, they can cause the network users and company many problems. To alleviate these issues, servers are commonly set up to be fault-tolerant.\r\n<span style=\"font-weight: bold;\">What are the examples of servers?</span>\r\nThe following list contains links to various server types:\r\n<ul><li>Application server;</li><li>Blade server;</li><li>Cloud server;</li><li>Database server;</li><li>Dedicated server;</li><li>Domain name service;</li><li>File server;</li><li>Mail server;</li><li>Print server;</li><li>Proxy server;</li><li>Standalone server;</li><li>Web server.</li></ul>\r\n<span style=\"font-weight: bold;\">How do other computers connect to a server?</span>\r\nWith a local network, the server connects to a router or switch that all other computers on the network use. Once connected to the network, other computers can access that server and its features. For example, with a web server, a user could connect to the server to view a website, search, and communicate with other users on the network.\r\nAn Internet server works the same way as a local network server, but on a much larger scale. The server is assigned an IP address by InterNIC, or by a web host.\r\nUsually, users connect to a server using its domain name, which is registered with a domain name registrar. When users connect to the domain name (such as "computerhope.com"), the name is automatically translated to the server's IP address by a DNS resolver.\r\nThe domain name makes it easier for users to connect to the server because the name is easier to remember than an IP address. 
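The name-to-address translation just described can be exercised directly with the standard socket API. A minimal sketch, with "localhost" standing in for a registered domain name so the lookup needs no network access:

```python
import socket

# Resolve a host name to an IPv4 address, the same translation a DNS
# resolver performs when a user enters a domain name such as
# "computerhope.com". "localhost" is resolved locally.
addr = socket.gethostbyname("localhost")
print(addr)   # e.g. 127.0.0.1
```

In practice a resolver consults local caches and configured DNS servers; the application only sees the resulting address, which is why the operator can change the server's IP without users noticing.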
Also, domain names enable the server operator to change the IP address of the server without disrupting the way that users access the server. The domain name can always remain the same, even if the IP address changes.\r\n<span style=\"font-weight: bold;\">Where are servers stored?</span>\r\nIn a business or corporate environment, a server and other network equipment are often stored in a closet or glasshouse. These areas help isolate sensitive computers and equipment from people who should not have access to them.\r\nServers that are remote or not hosted on-site are located in a data center. With these types of servers, the hardware is managed by another company and configured remotely by you or your company.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Server.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4934,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/IBM_LOGO.png","logo":true,"scheme":false,"title":"IBM WebSphere Enterprise Service Bus (ESB)","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":100,"alias":"ibm-websphere-enterprise-service-bus-esb","companyTitle":"IBM","companyTypes":["supplier","vendor"],"companyId":177,"companyAlias":"ibm","description":"<span style=\"font-weight: bold; \">Features IBM WebSphere Enterprise Service Bus (ESB):</span><br />\r\n<ul><li>Brings consistency to point-to-point connectivity</li></ul>\r\n<ul><li>Provides smart connectivity on internet-standard application infrastructure, to connect any application or data</li></ul>\r\n<ul><li>Supports a broad range of native bindings and adapters for service-oriented integration, incl. web services, MQ and JMS messaging, HTTP, EJB, databases, files. 
file transfer, email, Lotus Domino, System i (RPG programs), CICS, IMS, SAP, Oracle, Siebel, PeopleSoft, JDEdwards.</li></ul>\r\n<ul><li>Integrates seamlessly with the industry-leading WebSphere software platform for streamlined IT operations</li></ul>\r\n<ul><li>Extends easily to IBM Business Process Manager Advanced for service orchestration and BPM</li></ul>\r\n<ul><li>Tightly integrates with WebSphere Service Registry and Repository for SOA solutions</li></ul>\r\n<ul><li>Leverages declarative IBM Integration Designer for visual programming and testing</li></ul>\r\n<ul><li>Provides an integrated solution for both service mediation and service hosting</li></ul>\r\n<ul><li>WebSphere ESB is easy to use from both a tools and a run-time perspective. IBM Integration Designer, the development tool of choice for WebSphere ESB, delivers an integrated, interactive, and visual development experience that requires minimal programming skills. You can get up and running quickly with a compelling out-of-the-box experience that is supported by easy-to-understand samples and comprehensive documentation.</li></ul>","shortDescription":"IBM WebSphere Enterprise Service Bus (ESB) provides fast and flexible application integration with smaller costs and opens an opportunity for use of methods of interaction of the next generation.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":11,"sellingCount":13,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"IBM WebSphere Enterprise Service Bus (ESB)","keywords":"","description":"<span style=\"font-weight: bold; \">Features IBM WebSphere Enterprise Service Bus (ESB):</span><br />\r\n<ul><li>Brings consistency to point-to-point connectivity</li></ul>\r\n<ul><li>Provides smart connectivity on internet-standard application infrastructure, to co","og:title":"IBM WebSphere Enterprise Service Bus (ESB)","og:description":"<span style=\"font-weight: bold; \">Features IBM WebSphere 
Enterprise Service Bus (ESB):</span><br />\r\n<ul><li>Brings consistency to point-to-point connectivity</li></ul>\r\n<ul><li>Provides smart connectivity on internet-standard application infrastructure, to co","og:image":"https://old.roi4cio.com/fileadmin/user_upload/IBM_LOGO.png"},"eventUrl":"","translationId":4935,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":401,"title":"Service-Oriented Architecture and Web Services","alias":"service-oriented-architecture-and-web-services","description":" Service-oriented architecture (SOA) is a style of software design where services are provided to the other components by application components, through a communication protocol over a network. An SOA service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. SOA is also intended to be independent of vendors, products and technologies.\r\nA service has four properties according to one of many definitions of SOA:\r\n<ul><li>It logically represents a business activity with a specified outcome.</li><li>It is self-contained.</li><li>It is a black box for its consumers, meaning the consumer does not have to be aware of the service's inner workings.</li><li>It may consist of other underlying services.</li></ul>\r\nDifferent services can be used in conjunction to provide the functionality of a large software application,[4] a principle SOA shares with modular programming. Service-oriented architecture integrates distributed, separately maintained and deployed software components. 
It is enabled by technologies and standards that facilitate components' communication and cooperation over a network, especially over an IP network.\r\nSOA is related to the idea of an application programming interface (API), an interface or communication protocol between different parts of a computer program intended to simplify the implementation and maintenance of software. An API can be thought of as the service, and the SOA as the architecture that allows the service to operate.","materialsDescription":" <span style=\"font-weight: bold;\">What is Service-Oriented Architecture?</span>\r\nService-oriented architecture (SOA) is a software architecture style that supports and distributes application components incorporating discovery, data mapping, security and more. Service-oriented architecture has two main functions:\r\n<ol><li>Create an architectural model that defines the goals of applications and the methods that will help achieve those goals.</li><li>Define implementation specifications linked through WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol) specifications.</li></ol>\r\nService-oriented architecture principles are made up of nine main elements:\r\n<ol><li>Standardized Service Contract, where services are defined, making it easier for client applications to understand the purpose of the service.</li><li>Loose Coupling, a way of interconnecting components within the system or network so that the components depend on one another to the least extent acceptable. When a service's functionality or settings change, there is no downtime or breakage of the running application.</li><li>Service Abstraction hides the logic behind what the application is doing. 
It only relays to the client application what it is doing, not how it executes the action.</li><li>Service Reusability divides services with the intent of reusing them as much as possible, to avoid spending resources on building the same code and configurations.</li><li>Service Autonomy ensures the logic of a task or a request is completed within the code.</li><li>Service Statelessness, whereby services do not retain information from one state to another in the client application.</li><li>Service Discoverability allows services to be discovered via a service registry.</li><li>Service Composability breaks down larger problems into smaller elements, segmenting the service into modules, making it more manageable.</li><li>Service Interoperability governs the use of standards (e.g. XML) to ensure broader usability and compatibility.</li></ol>\r\n<span style=\"font-weight: bold;\">How Does Service-Oriented Architecture Work?</span>\r\nA service-oriented architecture (SOA) works as a provider of application services to other components over a network. 
Service-oriented architecture makes it easier for software components to work with each other over multiple networks.\r\nA service-oriented architecture is implemented with web services (based on WSDL and SOAP), to be more accessible over standard internet protocols that are on independent platforms and programming languages.\r\nService-oriented architecture has 3 major objectives all of which focus on parts of the application cycle:\r\n<ol><li>Structure process and software components as services – making it easier for software developers to create applications in a consistent way.</li><li>Provide a way to publish available services (functionality and input/output requirements) – allowing developers to easily incorporate them into applications.</li><li>Control the usage of these services for security purposes – mainly around the components within the architecture, and securing the connections between those components.</li></ol>\r\nMicroservices architecture software is largely an updated implementation of service-oriented architecture (SOA). 
The software components are created as services to be used via APIs ensuring security and best practices, just as in traditional service-oriented architectures.\r\n<span style=\"font-weight: bold;\">What are the benefits of Service-Oriented Architecture?</span>\r\nThe main benefits of service-oriented architecture solutions are:\r\n<ul><li>Extensibility – easily able to expand or add to it.</li><li>Reusability – opportunity to reuse multi-purpose logic.</li><li>Maintainability – the ability to keep it up to date without having to remake and build the architecture again with the same configurations.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Service_Oriented_Architecture_and_Web_Services.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":3400,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/hpe_simplivity.png","logo":true,"scheme":false,"title":"HPE SimpliVity","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":18,"alias":"hpe-simplivity","companyTitle":"HP Inc","companyTypes":["vendor"],"companyId":185,"companyAlias":"hp-inc","description":" HPE SimpliVity - innovative and scalable all in one virtualized solution that integrates software-defined compute, storage, and networking into a single, easy-to-manage platform.\r\n<span style=\"font-weight: bold; \">COMBINE. STREAMLINE. CONVERGE.</span>\r\nHyperconvergence means more than just merging storage and compute into a single solution. When the entire IT stack of multiple infrastructure components is combined into a software-defined platform, you can accomplish complex tasks in minutes instead of hours. 
Hyperconvergence gives you the agility and economics of cloud with the enterprise capabilities of on-premises infrastructure.\r\n<span style=\"font-weight: bold; \">Take control with Hyperconverged Infrastructure</span>\r\nSee the top reasons customers choose SimpliVity as an award-winning, high-performance solution for consolidating IT infrastructure, protecting data, and simplifying remote office IT.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Scale VDI resources and minimize user downtime</span></span>\r\nStore more VMs with HCI’s dedupe and compression capabilities. Data protection and replication get users back up and running faster with persistent desktops.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Centralize IT for ROBO operations</span></span>\r\nMultiple sites running disparate platforms equals a genuine hassle for IT. A common hyperconverged infrastructure brings order to administration, support, deployment, and data protection.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Protect and recover data</span></span>\r\nWith a distributed model that replicates data across multiple nodes, HCI’s built-in redundancy minimizes the impact of a lost node on your operations and simplifies disaster recovery.\r\n<span style=\"font-weight: bold; \">Architect’s Guide to HCI</span>\r\nLearn how to architect the hyperconverged data center, what resources to consolidate, and how to mitigate the perceived challenges of hyperconvergence.\r\n<span style=\"font-weight: bold;\">Administrator’s Guide to HCI</span>\r\nDesign the hyperconverged data center to address the pain points of data center metrics — like the relationship between performance and virtual machine density.","shortDescription":"HPE SimpliVity - an enterprise-grade hyperconverged platform that speeds application performance, improves efficiency and resiliency, and backs up/restores VMs in 
seconds.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":10,"sellingCount":0,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"HPE SimpliVity","keywords":"","description":" HPE SimpliVity - innovative and scalable all in one virtualized solution that integrates software-defined compute, storage, and networking into a single, easy-to-manage platform.\r\n<span style=\"font-weight: bold; \">COMBINE. STREAMLINE. CONVERGE.</span>\r\nHyperc","og:title":"HPE SimpliVity","og:description":" HPE SimpliVity - innovative and scalable all in one virtualized solution that integrates software-defined compute, storage, and networking into a single, easy-to-manage platform.\r\n<span style=\"font-weight: bold; \">COMBINE. STREAMLINE. CONVERGE.</span>\r\nHyperc","og:image":"https://old.roi4cio.com/fileadmin/user_upload/hpe_simplivity.png"},"eventUrl":"","translationId":3401,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":697,"title":"Backup Administration","alias":"backup-administration","description":" Nowadays, information, along with human capital, is the most valuable asset of every enterprise. Backup system administration is an integral part of the data and IT system security structure. It is the quality and method of the backup process that determine whether, in the case of a system failure or data loss, it will be possible to maintain the functionality and continuity of the enterprise’s operations. This is why careful creation of backup copies is so important.\r\nCreating backup copies can be burdensome, expensive and time-consuming when you do it all by yourself. On the other hand, automating the process introduces a range of improvements, saves time and eliminates the risk of data loss. The copies are created automatically and are protected against interference by third parties. 
The network administrator can remotely manage the backup system, monitor the validity of created copies, and retrieve lost information.","materialsDescription":" <span style=\"font-weight: bold;\">The need for backup: when will a backup scheme help out?</span>\r\n<span style=\"font-weight: bold;\">Data corruption</span>\r\nThe need to create a backup is most obvious in the case when your data may undergo damage - physical destruction or theft of the carrier, virus attack, accidental and/or illegal changes, etc.\r\nA working backup plan will allow you to restore your data in the event of any failure or accident without undue cost and complexity.\r\n<span style=\"font-weight: bold;\">Copying information, creating mirrors</span>\r\nA less obvious option for using the backup scheme is to automatically create copies of data not for storage, but for use: cloning and mirroring databases, web sites, work projects, etc.\r\nThe backup scheme does not define what, where and why to copy - use backup as a cloning tool.\r\n<span style=\"font-weight: bold;\">Test, training and debugging projects</span>\r\nA special case of data cloning is the creation of a copy of working information in order to debug, improve or study its processing system. You can create a copy of your website or database using the backup instructions to make and debug any changes.\r\nThe need for backing up training and debugging versions of information is all the greater because the changes you make often lead to data loss.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Backup_Administration.png"},{"id":46,"title":"Data Protection and Recovery Software","alias":"data-protection-and-recovery-software","description":"Data protection and recovery software provides data backup, integrity and security for data backups, enabling timely, reliable and secure backup of data from a host device to a destination device. 
Recently, the data protection and recovery software market has been disrupted by innovative technologies such as server virtualization, disk-based backup, and cloud services, where emerging players play an important role. Tier one players such as IBM, Hewlett Packard Enterprise, EMC Corporation, Symantec Corporation and Microsoft Corporation are also moving towards these technologies through partnerships and acquisitions.\r\nThe major factor driving the data protection and recovery software market is the high adoption of cloud-based services and technologies. Many organizations are moving towards the cloud to reduce their operational expenses and to provide real-time access to their employees. However, increased usage of the cloud has increased the risk of data loss, data theft, and unauthorized access to confidential information, which increases the demand for data protection and recovery solution suites.","materialsDescription":" \r\n<span style=\"font-weight: bold; \">What is Data recovery?</span>\r\nData recovery is the process of salvaging (retrieving) inaccessible, lost, corrupted, damaged or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a normal way. The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).\r\nThe most common data recovery scenario involves an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. 
(typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be easily accomplished using a Live CD or DVD by booting directly from a ROM instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.\r\nAnother scenario involves a drive-level failure, such as a compromised file system or drive partition, or a hard disk drive failure. In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions range from repairing the logical file system, partition table or master boot record, to updating the firmware, to drive recovery techniques that span software-based recovery of corrupted data, hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), and hardware replacement on a physically damaged drive, which allows extraction of data to a new drive. If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.\r\nIn a third scenario, files have been accidentally "deleted" from a storage medium by the users. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and thereafter the space the deleted data occupies is made available for later overwriting. 
From the end user's point of view, deleted files are not discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in a number of disconnected fragments, and may be recoverable if not overwritten by other data files.\r\nThe term "data recovery" is also used in the context of forensic applications or espionage, where data which have been encrypted or hidden, rather than damaged, are recovered. Sometimes data present on a computer becomes encrypted or hidden, for reasons such as a virus attack, and can only be recovered by computer forensics experts.\r\n<span style=\"font-weight: bold;\">What is a backup?</span>\r\nA backup, or data backup, or the process of backing up, refers to the copying into an archive file of computer data that is already in secondary storage—so that it may be used to restore the original after a data loss event. The verb form is "back up" (a phrasal verb), whereas the noun and adjective form is "backup".\r\nBackups have two distinct purposes. The primary purpose is to recover data after its loss, be it by data deletion or corruption. Data loss can be a common experience of computer users; a 2008 survey found that 66% of respondents had lost files on their home PC. The secondary purpose of backups is to recover data from an earlier time, according to a user-defined data retention policy, typically configured within a backup application for how long copies of data are required. Though backups represent a simple form of disaster recovery and should be part of any disaster recovery plan, backups by themselves should not be considered a complete disaster recovery plan. 
One reason for this is that not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server by simply restoring data from a backup.\r\nSince a backup system contains at least one copy of all data considered worth saving, the data storage requirements can be significant. Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository model may be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are useful for making backups. There are also many different ways in which these devices can be arranged to provide geographic redundancy, data security, and portability.\r\nBefore data are sent to their storage locations, they are selected, extracted, and manipulated. Many different techniques have been developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources as well as compression, encryption, and de-duplication, among others. Every backup scheme should include dry runs that validate the reliability of the data being backed up. It is important to recognize the limitations and human factors involved in any backup scheme.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Data_Protection_and_Recovery_Software__1_.png"},{"id":509,"title":"Converged and Hyper Converged System","alias":"converged-and-hyper-converged-system","description":" Converged and hyper convergent infrastructures simplify support for virtual desktop infrastructure and desktop virtualization, as they are designed to be easy to install and perform complex tasks.\r\nConvergent infrastructure combines the four main components of a data center in one package: computing devices, storage devices, network devices, and server virtualization tools. 
Hyper-converged infrastructure allows for tighter integration of a larger number of components using software tools.\r\nIn both convergent and hyper-convergent infrastructure, all elements are compatible with each other. Thanks to this, you can purchase the necessary storage and network devices for your company all at once, and these, as you know, are of great importance in a virtual desktop infrastructure. This simplifies the process of deploying such an infrastructure - something that many companies needing to virtualize their desktop systems have been waiting for and will welcome.\r\nDespite their value and innovation, these technologies raise several questions regarding their intended use and differences. Let's try to figure out what functionality converged and hyper-convergent infrastructures offer and how they differ.","materialsDescription":" <span style=\"font-weight: bold;\">What is converged infrastructure?</span>\r\nConvergent infrastructure combines computing devices, storage, network devices and server virtualization tools in one chassis so that they can be managed from one place. Management capabilities may include the management of virtual desktop infrastructure, depending on the selected configuration and manufacturer.\r\nThe hardware included in the bundled converged infrastructure is pre-configured to support particular targets: virtual desktop infrastructures, databases, special applications, and so on. In practice, however, you do not have much freedom to change the selected configuration.\r\nRegardless of the method chosen for extending the virtual desktop infrastructure environment, you should understand that subsequent vertical scaling will be costly and time-consuming. Adding individual components becomes complex and deprives you of many of the benefits of a converged infrastructure. 
Adding workstations and expanding storage capacity in a corporate infrastructure can be just as expensive, which suggests the need for proper planning for any virtual desktop infrastructure deployment.\r\nOn the other hand, all components of a converged infrastructure can work for a long time. For example, a complete server of such infrastructure works well even without the rest of the infrastructure components.\r\n<span style=\"font-weight: bold;\">What is a hyper-convergent infrastructure?</span>\r\nThe hyper-converged infrastructure was built on the basis of converged infrastructure and the concept of a software-defined data center. It combines all the components of the usual data center in one system. All four key components of the converged infrastructure are in place, but sometimes it also includes additional components, such as backup software, snapshot capabilities, data deduplication functionality, intermediate compression, global network optimization (WAN), and much more. Convergent infrastructure relies primarily on hardware, and software-defined data center often adapts to any hardware. In the hyper-convergent infrastructure, these two possibilities are combined.\r\nHyper-converged infrastructure is supported by one supplier. It can be managed as a single system with a single set of tools. To expand the infrastructure, you just need to install blocks of necessary devices and resources (for example, storage) into the main system block. And this is done literally on the fly.\r\nSince the hyper-convergent infrastructure is software-defined (that is, the operation of the infrastructure is logically separated from the physical equipment), the mutual integration of components is denser than in a conventional converged infrastructure, and the components themselves must be nearby to work correctly. This makes it possible to use a hyper-convergent infrastructure to support even more workloads than in the case of conventional converged infrastructure. 
This is because its definition and tuning can be changed at the software level. In addition, you can make it work with specialized applications and workloads, which pre-configured converged infrastructures do not allow.\r\nHyper-converged infrastructure is especially valuable for working with a virtual desktop infrastructure because it allows you to scale up quickly without additional costs. Often, in the case of the classic virtual desktop infrastructure, things are completely different - companies need to buy more resources before scaling or wait for virtual desktops to use the allocated space and network resources, and then, in fact, add new infrastructure.\r\nBoth scenarios require significant time and money. But, in the case of hyperconvergent infrastructure, if you need to expand the storage, you can simply install the required devices in the existing stack. Scaling can be done quickly - in the time required to deliver the equipment. In this case, you do not have to go through the full procedure of re-evaluation and reconfiguration of the corporate infrastructure.\r\nIn addition, when moving from physical PCs to virtual workstations, you will need devices to perform all the computational tasks that laptops and PCs typically perform. Hyper-converged infrastructure will greatly help with this, as it often comes bundled with a large amount of flash memory, which has a positive effect on the performance of virtual desktops. This increases the speed of I/O operations, smooths performance under high loads, and allows you to perform virus scanning and other types of monitoring in the background (without distracting users).\r\nThe flexibility of the hyper-converged infrastructure makes it more scalable and cost-effective compared to the convergent infrastructure since it has the ability to add computing and storage devices as needed. 
The cost of the initial investment for both infrastructures is high, but in the long term, the value of the investment should pay off.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Converged_and_Hyper_Converged_System.png"},{"id":35,"title":"Server","alias":"server","description":"In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.\r\nClient–server systems are today most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.\r\nStrictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. 
In addition to server, the words serve and service (as noun and as verb) are frequently used, though servicer and servant are not. The word service (noun) may refer to either the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests".\r\nThe server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.\r\nWhile request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; this initial registration may be done by request–response. 
Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.","materialsDescription":" <span style=\"font-weight: bold;\">What is a server?</span>\r\nA server is a software or hardware device that accepts and responds to requests made over a network. The device that makes the request, and receives a response from the server, is called a client. On the Internet, the term "server" commonly refers to the computer system which receives a request for a web document and sends the requested information to the client.\r\n<span style=\"font-weight: bold;\">What are they used for?</span>\r\nServers are used to manage network resources. For example, a user may set up a server to control access to a network, send/receive an e-mail, manage print jobs, or host a website. They are also proficient at performing intense calculations. Some servers are committed to a specific task, often referred to as dedicated. However, many servers today are shared servers which can take on the responsibility of e-mail, DNS, FTP, and even multiple websites in the case of a web server.\r\n<span style=\"font-weight: bold;\">Why are servers always on?</span>\r\nBecause they are commonly used to deliver services that are constantly required, most servers are never turned off. Consequently, when servers fail, they can cause the network users and company many problems. 
To alleviate these issues, servers are commonly set up to be fault-tolerant.\r\n<span style=\"font-weight: bold;\">What are the examples of servers?</span>\r\nThe following list contains links to various server types:\r\n<ul><li>Application server;</li><li>Blade server;</li><li>Cloud server;</li><li>Database server;</li><li>Dedicated server;</li><li>Domain name service;</li><li>File server;</li><li>Mail server;</li><li>Print server;</li><li>Proxy server;</li><li>Standalone server;</li><li>Web server.</li></ul>\r\n<span style=\"font-weight: bold;\">How do other computers connect to a server?</span>\r\nWith a local network, the server connects to a router or switch that all other computers on the network use. Once connected to the network, other computers can access that server and its features. For example, with a web server, a user could connect to the server to view a website, search, and communicate with other users on the network.\r\nAn Internet server works the same way as a local network server, but on a much larger scale. The server is assigned an IP address by InterNIC, or by a web host.\r\nUsually, users connect to a server using its domain name, which is registered with a domain name registrar. When users connect to the domain name (such as "computerhope.com"), the name is automatically translated to the server's IP address by a DNS resolver.\r\nThe domain name makes it easier for users to connect to the server because the name is easier to remember than an IP address. Also, domain names enable the server operator to change the IP address of the server without disrupting the way that users access the server. The domain name can always remain the same, even if the IP address changes.\r\n<span style=\"font-weight: bold;\">Where are servers stored?</span>\r\nIn a business or corporate environment, a server and other network equipment are often stored in a closet or glasshouse. 
These areas help isolate sensitive computers and equipment from people who should not have access to them.\r\nServers that are remote or not hosted on-site are located in a data center. With these types of servers, the hardware is managed by another company and configured remotely by you or your company.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Server.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4938,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/ibm_filenet_p8_platform.png","logo":true,"scheme":false,"title":"IBM FileNet P8 Platform","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":100,"alias":"ibm-filenet-p8-platform","companyTitle":"IBM","companyTypes":["supplier","vendor"],"companyId":177,"companyAlias":"ibm","description":"The FileNet P8 platform offers enterprise-level scalability and flexibility to handle the most demanding content challenges, the most complex business processes, and integration to all your existing systems. FileNet P8 is a reliable, scalable, and highly available enterprise platform that enables you to capture, store, manage, secure, and process information to increase operational efficiency and lower total cost of ownership. FileNet P8 enables you to streamline and automate business processes, access and manage all forms of content, and automate records management to help meet compliance needs.<br />\r\nThe FileNet P8 family of products includes back-end services, development tools, and applications that address enterprise content and process management requirements.\r\n<span style=\"font-weight: bold;\">Content management</span><br />At the core of the platform are repository services for capturing, managing, and storing your business-related digital assets. 
Multiple repositories, called object stores, can be created and managed within a single system to serve your business requirements.<br />\r\n<span style=\"font-weight: bold;\">Integration with external content repositories</span><br />\r\nIBM® FileNet Content Federation Services enables you to integrate data in an external repository with FileNet P8 and access the documents as though they are stored in an object store. An external repository acts like a virtual storage area for the Content Platform Engine system.<br />\r\n<span style=\"font-weight: bold;\">Workflow management</span><br />\r\nFileNet P8 lets you create, modify, manage, analyze, and simulate business processes, or workflows, that are performed by applications, enterprise users, and external users such as partners and customers.<br />\r\n<span style=\"font-weight: bold;\">Application environment</span><br />\r\nThe FileNet P8 platform includes an application environment to provide users with enterprise content management (ECM) functionality. IBM Content Navigator is a web client that provides users with a console for working with content from multiple content servers, including content that is stored on Content Platform Engine object stores.<br />\r\n<span style=\"font-weight: bold;\">Application integration</span><br />\r\nFileNet P8 tools help you integrate with various vendor applications.<br />\r\n<span style=\"font-weight: bold;\">Records management</span><br />\r\nDesigned to solve today's process-oriented enterprise records management and compliance needs, IBM Enterprise Records is a records management solution that can help companies manage risk through effective, enforceable records management policy, for achievable and cost-effective compliance. 
IBM Enterprise Records is fully integrated with the FileNet P8 platform.<br />\r\n<span style=\"font-weight: bold;\">System management</span><br />\r\nFileNet P8 provides a complete set of system administration tools that allow for monitoring, validation, and configuration changes from a central location with a dispersed deployment. These tools, described in the following sections, can be used to manage the entire system.<br />\r\n<span style=\"font-weight: bold;\">Enterprise capabilities</span><br />\r\nFileNet P8 components provide the enterprise-level capabilities that are required for solving critical business requirements. This section enumerates these product characteristics.","shortDescription":"IBM® FileNet® P8 Platform is a next-generation, unified enterprise foundation for the integrated IBM FileNet P8 products. ","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":9,"sellingCount":20,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"IBM FileNet P8 Platform","keywords":"","description":"The FileNet P8 platform offers enterprise-level scalability and flexibility to handle the most demanding content challenges, the most complex business processes, and integration to all your existing systems. FileNet P8 is a reliable, scalable, and highly avail","og:title":"IBM FileNet P8 Platform","og:description":"The FileNet P8 platform offers enterprise-level scalability and flexibility to handle the most demanding content challenges, the most complex business processes, and integration to all your existing systems. 
FileNet P8 is a reliable, scalable, and highly avail","og:image":"https://old.roi4cio.com/fileadmin/user_upload/ibm_filenet_p8_platform.png"},"eventUrl":"","translationId":4939,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":182,"title":"CMS - Content management system","alias":"cms-content-management-system","description":"A content management system (CMS) manages the creation and modification of digital content. It typically supports multiple users in a collaborative environment.\r\nCMS features vary widely. Most CMSs include Web-based publishing, format management, history editing and version control, indexing, search and retrieval. By their nature, content management systems support the separation of content and presentation.\r\nContent management software solutions are typically used for enterprise content management systems (ECM) and web site content management systems (WCM). An ECM facilitates collaboration in the workplace by integrating document management, digital asset management and records retention functionalities, and providing end users with role-based access to the organization's digital assets. A WCM facilitates collaborative authoring for websites. ECM software often includes a WCM publishing functionality, but ECM webpages typically remain behind the organization's firewall.\r\nBoth enterprise content management and web content management systems have two components: a content management application (CMA) and a content delivery application (CDA). The CMA is a graphical user interface (GUI) that allows the user to control the design, creation, modification and removal of content from a website without needing to know anything about HTML. The CDA component provides the back-end services that support management and delivery of the content once it has been created in the CMA.\r\nDigital asset management systems are another type of CMS. 
They manage content with a clearly defined author or ownership, such as documents, movies, pictures, phone numbers, and scientific data. Companies also use CMSs to store, control, revise, and publish documentation.\r\nBased on market share statistics, the most popular CMS is WordPress, used by more than 28% of all websites on the Internet, and by 59% of all websites using a known content management system, followed by Joomla and Drupal.\r\n<span style=\"font-weight: bold;\">Content management systems typically provide the following features:</span>\r\n<ul><li>Search engine optimization</li><li>Integrated and online documentation</li><li>Modularity and extensibility</li><li>User and group functionality</li><li>Templating support for changing designs</li><li>Installation and upgrade wizards</li><li>Integrated audit logs</li><li>Compliance with various accessibility frameworks and standards, such as WAI-ARIA</li><li>Reduced need to code from scratch</li><li>Unified user experience</li><li>Version control</li><li>Edit permission management</li></ul>","materialsDescription":"<h1 class=\"align-center\"> <span style=\"font-weight: normal;\">What is a CMS?</span></h1>\r\nAnswer: CMS is an acronym for &quot;Content Management System&quot;. You may see some variations on this term, but they all refer to the same concept. Variations include:\r\n<ul><li>Content Management System</li><li>Web CMS</li><li>Web Content Management System</li><li>CMS Platform</li><li>Content Management Platform</li><li>CMS System</li></ul>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What does a CMS do?</span></h1>\r\n<p class=\"align-left\">In its simplest terms, Content Management Systems are designed to help users create and manage their websites. 
Content management solutions help webmasters manage the many different resources, content types and various data that make up modern web sites.</p>\r\n<p class=\"align-left\">At a minimum, modern websites make use of HTML, CSS, JavaScript and images (jpeg, gif, png, etc) to create web content for visitors to read. At the core of every CMS is the ability to organize these resources and generate valid content that can be read by web browsers. </p>\r\n<p class=\"align-left\">More advanced websites have interactive components (comment sections, forums, e-commerce...) that require server software to validate and save user-submitted content.<br />All of the top CMS platforms have these features built in or available for download as add-ons.</p>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the main types of CMS?</span></h1>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Simple CMS.</span> This system is used to create simple websites that contain several pages using simple control systems. Simple content management systems consist of several modules that are set one time. These CMSs are free and are available on the internet. Among their disadvantages are the inability to change settings, low transmission capacity, the inability to create pages dynamically and the inability to delegate the administrator’s credentials to others.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"> </p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Template CMS.</span> It consists of modules as well, but its structure is more complex than that of a simple CMS. A template CMS has high transmission capacity, handling around 50,000 inquiries. It also supports dynamic pages and the ability to delegate the administrator’s credentials. 
Many template systems are used to create website content because they are easy to use.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"> </p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Professional CMS</span>. This type of CMS has a higher level of complexity. You may change the structure of internet resources, and additional modules can be attached to these systems. These systems are used to create information portals or massive projects. As a rule, these CMSs are a paid resource.<span style=\"font-weight: bold;\"></span></p>\r\n<p class=\"align-left\"> </p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold;\">Universal CMS</span>. Universal systems have wide functionality and ample opportunities to develop websites of any complexity. They support the functions of changing the structure, creating dynamic pages, modifying settings and distributing credentials. A universal CMS is quite expensive. These CMSs are used for work with large portals and web projects that require high functionality and dynamics.</p>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/CMS_-_content_management_system.png"},{"id":66,"title":"BPM - Business Process Management","alias":"bpm-business-process-management","description":"<span style=\"font-weight: bold; \">Business process management (BPM)</span> is a discipline in operations management in which people use various methods to discover, model, analyze, measure, improve, optimize, and automate business processes. BPM focuses on improving corporate performance by managing business processes. Any combination of methods used to manage a company's business processes is BPM. 
Processes can be structured and repeatable or unstructured and variable.\r\nAs an approach, BPM sees processes as important assets of an organization that must be understood, managed, and developed to announce and deliver value-added products and services to clients or customers. This approach closely resembles other total quality management or continuous improvement process methodologies. ISO 9000 promotes the process approach to managing an organization.<span style=\"font-weight: bold; \"></span>\r\n<span style=\"font-weight: bold; \">Successfully employing BPM usually involves the following:</span>\r\nOrganizing around outcomes not tasks to ensure the proper focus is maintained\r\nCorrecting and improving processes before (potentially) automating them; otherwise all you’ve done is make the mess run faster\r\nEstablishing processes and assigning ownership lest the work and improvements simply drift away – and they will, as human nature takes over and the momentum peters out\r\nStandardizing processes across the enterprise so they can be more readily understood and managed, errors reduced, and risks mitigated\r\nEnabling continuous change so the improvements can be extended and propagated over time\r\nImproving existing processes, rather than building radically new or “perfect” ones, because that can take so long as to erode or negate any gains achieved\r\n<span style=\"font-weight: bold; \">Business Process Management Software (BPMS)</span> is a process automation tool. It helps you map out your everyday processes to identify and eliminate bottlenecks, control your company’s costs, make your day-to-day processes as efficient as possible, and ensure the effectiveness of the people involved in your processes. A business process management solution to a company’s needs begins with the alignment of business goals with an eye toward creating value through process change initiatives. 
This alignment leads to a thorough understanding and design of representative processes, typically following an industry-standard framework.\r\nA BPM-based foundation provides for complete lifecycle management of business processes, integration across technologies, and embeds efficiency among people, processes, and technologies.\r\nCommercial business process management tools tend to center on the automation of business processes, essentially moving them from manual pen-and-paper endeavors to effortless automated transactions. BPM software products track how business information is used, map the relevant business processes, and ensure that transactions are done accordingly. This effectively shows where data and process bottlenecks occur and highlights various deficiencies in business processes, including areas where resources are wasted, allowing managers to streamline and optimize those processes.\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">There are three key types of BPMS:</span></p>\r\n<span style=\"font-weight: bold; \">Efficiency Monitors:</span> Monitor every system of the enterprise for inefficiency in the processes by following each one from start to finish. The BPM program accurately pinpoints weaknesses and bottlenecks where customers might get frustrated and discontinue transactions and processes.\r\n<span style=\"font-weight: bold; \">Workflow Software:</span> Uses detailed maps of existing processes and tries to streamline them by optimizing certain steps. BPM workflow software cannot suggest improvements to the process, only optimize it, so this software is only as good as the process itself.\r\n<span style=\"font-weight: bold; \">Enterprise Application Integration Tools:</span> A mixture of efficiency monitors and process and workflow management, EAI software is used to integrate legacy systems into new systems. 
This software can be used to map points for integrating old and new systems, optimizing their information-gathering characteristics and increasing the efficiency of system communications.<br /><br /><br />","materialsDescription":"<h1 class=\"align-center\">What Are the Types of Business Process Management Software?</h1>\r\n<p class=\"align-center\">There are <span style=\"font-weight: bold; \">three basic kinds</span> of BPM frameworks:</p>\r\n<span style=\"font-weight: bold; \">Horizontal frameworks.</span> They deal with the design and development of business processes. They are generally focused on technology and reuse.\r\n<span style=\"font-weight: bold; \">Vertical BPM frameworks.</span> These focus on specific sets of coordinated tasks, using pre-built templates which can be easily deployed and configured.\r\n<span style=\"font-weight: bold; \">Full-service BPM suites.</span> They have five basic components: process discovery and project scoping; process modeling and design; a business rules engine; a workflow engine; and simulation and testing.\r\n<p class=\"align-center\">There are <span style=\"font-weight: bold; \">two types of BPM software</span> as it pertains to deployment:<span style=\"font-weight: bold; \"></span></p>\r\n<p class=\"align-left\"><span style=\"font-weight: bold; \">On-premise</span> business process management (BPM). This has been the norm for most enterprises.</p>\r\n<span style=\"font-weight: bold; \">Software as a Service (SaaS).</span> Advances in cloud computing have led to an increased interest in various “software-on-demand” offerings.\r\n<h1 class=\"align-center\">What are BPM Tools?</h1>\r\n<span style=\"font-weight: bold; \">Business Process Management (BPM) tools</span> are used for automating, measuring and optimizing business processes. 
BPM automation tools use workflow and collaboration to provide meaningful metrics to business leaders.\r\n<span style=\"font-weight: bold; \">Misconceptions about BPM Tools.</span> There’s a common misconception that BPM tools do not easily demonstrate their benefit to the organization. While the benefit from using BPM tools can be hard to quantify, it can be expressed more effectively in terms of business value.\r\n<span style=\"font-weight: bold; \">Process Management Tools.</span> Tools that allow process managers (those responsible for organizing the process or activity) to secure the resources needed to execute it and measure the results of the activity, providing rewards or corrective feedback when necessary. Process manager tools also allow process managers to change and improve the process whenever possible.\r\n<span style=\"font-weight: bold;\">Process Modeling Tools.</span> Software tools that let managers or analysts create business process diagrams. Simple tools only support diagramming. Professional process modeling tools store each model element in a database so that it can be reused on other diagrams or updated. Much business process improvement software supports simulation or code generation.<br /><br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/BPM_-_Business_Process_Management.png"},{"id":146,"title":"ECM - Enterprise Content Management","alias":"ecm-enterprise-content-management","description":"<span style=\"font-weight: bold; \">Enterprise content management (ECM)</span> extends the concept of content management by adding a timeline for each content item and possibly enforcing processes for their creation, approval and distribution. Systems that implement ECM generally provide a secure repository that indexes managed items, be they analog or digital. 
They also include one or more methods for importing content to bring new items under management and several presentation methods to make items available for use. The key feature of ECM that distinguishes it from "simple" content management is that an ECM is at least cognizant of the processes and procedures of the enterprise it is created for, and as such is particular to it. \r\nECM as an umbrella term covers enterprise document management system, Web content management, search, collaboration, records management, digital asset management (DAM), workflow management, capture and scanning. ECM is primarily aimed at managing the life-cycle of information from initial publication or creation all the way through archival and eventual disposal. ECM enterprise content management software is delivered in four ways:\r\n- on-premises software (installed on an organization's own network)\r\n- software as a service (SaaS) (Web access to information that is stored on a software manufacturer's system)\r\n- a hybrid composed of both on-premises and SaaS components\r\n- Infrastructure as a Service (IaaS) (which refers to online services that abstract the user from the details of infrastructure like physical computing resources, location, data partitioning, scaling, security, backup etc.)\r\n<span style=\"font-weight: bold;\">ECM provides</span> a centralized platform where content can be held and disseminated in a manner that meets regulatory compliance requirements and risk management guidelines. An ECM achieves the latter two benefits by eliminating ad hoc processes that can expose an enterprise to regulatory compliance risks and other potential problems. 
Full-function enterprise content management solutions include features such as content taxonomies, auditing capabilities, check-in/check-out and other workflow controls and security mechanisms.\r\nAn <span style=\"font-weight: bold;\">effective ECM </span>can streamline access and business processes, eliminate bottlenecks by reducing storage as well as paper and mailing needs, optimize security, maintain integrity and minimize overhead. All of these can lead to increased productivity. The first step is to document all the types of content that the organization deals with, the business processes it is part of and who handles the content. \r\nECM software can be used to identify duplicate and near-duplicate content, allowing the organization to keep a few copies of a particular piece of content instead of hundreds. The best ECM software extends the reach of traditional ECM capabilities into previously isolated applications and information silos, such as ERP, CRM, SCM and HCM, to take the shape of a content services platform. Information can now flow across the enterprise to the people and processes, when, where and in whatever context it is needed.\r\nTo understand more specific ways it could help your company, consider these <span style=\"font-weight: bold; \">three types of ECM</span> software solutions.\r\n<span style=\"font-weight: bold; \">Web Content Management.</span> WCM puts control over the look and feel of a website in the hands of specific, key people. It’s used by organizations with relatively complex websites and strict brand guidelines, giving those key personnel the means to easily update, modify and publish content for the sites while adhering to the guidelines.\r\n<span style=\"font-weight: bold; \">Collaborative Content Management.</span> CCM enables multiple people to access and modify a single document, such as a legal document. It’s ideal for organizations that must manage projects involving multiple stakeholders. 
CCM makes it easy to work together while keeping track of, and updating, the most-current version of the document.\r\n<span style=\"font-weight: bold; \">Transactional Content Management.</span> This type of ECM document management is designed for organizations that repeatedly use varied types of content, including records, paper documents, and digital files. TCM solutions capture content from various channels, classify it, store it, create an automated workflow to ensure the right user receives the content at the right time, and even deletes documents when they’re no longer needed, all while working seamlessly with other apps and databases, ensuring all of that content is available throughout the company.<br /><br /><br />\r\n\r\n","materialsDescription":"<h1 class=\"align-center\"> <span style=\"font-weight: bold; \">What is Enterprise Content Management (ECM)?</span></h1>\r\nEnterprise Content Management is the organization of structured and unstructured documents using technology and software that allows your organization to “work smarter, not harder.” As technology advanced and everything became digital, organizations needed a new way to store and access files, leading to the creation of ECM. 
\r\nECM document management system consists of four main points:\r\n<ul><li><span style=\"font-weight: bold; \">Capture:</span> Capturing information from hardcopy documents or online forms and transferring it into the system</li><li><span style=\"font-weight: bold; \">Manage:</span> Managing the captured data in a structured format that allows quick and easy retrieval</li><li><span style=\"font-weight: bold; \">Storing:</span> Securely storing files in a central repository that can be accessed from multiple locations</li><li><span style=\"font-weight: bold; \">Delivery:</span> Implementation of business process workflows to automatically move documents from one step to the next</li></ul>\r\n<h1 class=\"align-center\"><span style=\"font-weight: bold; \">Five ways ECM software can benefit your organization</span></h1>\r\n<span style=\"font-weight: bold; \">Basic file sharing and library services.</span> At its core, enterprise document management software begins with basic file sharing and library services managed within a networked repository. Individuals and groups with predefined access rights and permissions can access the repository and then create, read, update and delete files stored within it.\r\nMany ECM applications support Content Management Interoperability Services, an industry standard that allows different vendors' products to interoperate; this is an essential capability within large enterprises that maintain content management tools from multiple vendors.\r\n<span style=\"font-weight: bold; \">Content governance, compliance and records management.</span> For many organizations, managing business documents or other content types is a critical use case for ECM. 
Companies subject to compliance or other industry regulations need document content management system software to capture, manage, archive and ultimately dispose of files after a predefined period.\r\nECM can ensure that only individuals with predefined permissions - determined by access controls - can update or view documents stored within a repository. An organization can thus manage document modification.\r\nIn addition, enterprise content management tools can log all actions, providing an organization with the capabilities to maintain an auditable record of all the changes to documents within the repository.\r\n<span style=\"font-weight: bold; \">Business process management.</span> Companies also use ECM to establish workflows that span departments and geographies to support extended enterprise and inter-enterprise business processes.\r\nMost ECM software provides tools to help both technical and non-technical business users define business processes. Most applications provide audit controls to track each step of the process and analytic capabilities to help identify inefficiencies and streamline business procedures.\r\n<span style=\"font-weight: bold; \">Content repositories linked to other enterprise applications.</span> Some companies use electronic content management software as a repository for documents created by other enterprise applications, including CRM, ERP, HR and financial systems. These enterprise systems can seamlessly access, view or modify content managed by the ECM.\r\n<span style=\"font-weight: bold; \">Enabling mobile and remote workforces.</span> Content management tools often include functionality to allow remote workers to access content from mobile devices. This is an increasingly important feature for many companies.\r\nMobile capabilities also enable new kinds of data capture and presentation functionalities. 
By combining content management capabilities with other data, for example, a political canvasser can use a tablet to enter new information about a political donor without having to start from scratch, as some of that information is already stored in a content management system. \r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/ECM_-_Enterprise_Content_Management.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4956,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/IBM_BladeCenter.jpg","logo":true,"scheme":false,"title":"IBM BladeCenter","vendorVerified":0,"rating":"0.00","implementationsCount":2,"suppliersCount":0,"supplierPartnersCount":100,"alias":"ibm-bladecenter","companyTitle":"IBM","companyTypes":["supplier","vendor"],"companyId":177,"companyAlias":"ibm","description":"Introduced in 2002, based on engineering work started in 1999, the IBM BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 Intel server processors and input/output (I/O) options. In February 2006, IBM introduced the BladeCenter H with switch capabilities for 10 Gigabit Ethernet and InfiniBand 4X.<br />A web site called Blade.org was available for the blade computing community through about 2009.<br />In 2012 the replacement Flex System was introduced.<br /><span style=\"font-weight: bold;\"><br />IBM BladeCenter (E)</span>\r\nThe original IBM BladeCenter was later marketed as BladeCenter E[3] with 14 blade slots in 7U. 
Power supplies have been upgraded through the life of the chassis from the original 1200 to 1400, 1800, 2000 and 2320 watts.<br />\r\nThe BladeCenter (E) was co-developed by IBM and Intel and included:\r\n<ul><li>14 blade slots in 7U</li></ul>\r\n<ul><li>Shared media tray with optical drive, floppy drive and USB 1.1 port</li></ul>\r\n<ul><li>One (upgradable to two) management modules</li></ul>\r\n<ul><li>Two (upgradable to four) power supplies</li></ul>\r\n<ul><li>Two redundant high-speed blowers</li></ul>\r\n<ul><li>Two slots for Gigabit Ethernet switches (can also have optical or copper pass-through)</li></ul>\r\n<ul><li>Two slots for optional switch or pass-through modules, can have additional Ethernet, Fibre Channel, InfiniBand or Myrinet 2000 functions.</li></ul>\r\n<br /><span style=\"font-weight: bold;\">IBM BladeCenter T</span><br />\r\nBladeCenter T is the telecommunications company version of the original IBM BladeCenter, available with either AC or DC (48 V) power. It has 8 blade slots in 8U, but uses the same switches and blades as the regular BladeCenter E. To maintain NEBS Level 3 / ETSI compliance, special Network Equipment-Building System (NEBS) compliant blades are available.<br /><br /><span style=\"font-weight: bold;\">IBM BladeCenter H</span><br />\r\nUpgraded BladeCenter design with high-speed fabric options. Fits 14 blades in 9U. 
Backwards compatible with older BladeCenter switches and blades.\r\n<ul><li>14 blade slots in 9U</li></ul>\r\n<ul><li>Shared Media tray with Optical Drive and USB 2.0 port</li></ul>\r\n<ul><li>One (upgradable to two) Advanced Management Modules</li></ul>\r\n<ul><li>Two (upgradable to four) Power supplies</li></ul>\r\n<ul><li>Two redundant High-speed blowers</li></ul>\r\n<ul><li>Two slots for Gigabit Ethernet switches (can also have optical or copper pass-through)</li></ul>\r\n<ul><li>Two slots for optional switch or pass-through modules, can have additional Ethernet, Fibre Channel, InfiniBand or Myrinet 2000 functions.</li></ul>\r\n<ul><li>Four slots for optional high-speed switches or pass-through modules, can have 10 Gbit Ethernet or InfiniBand 4X.</li></ul>\r\n<ul><li>Optional Hard-wired serial port capability</li></ul>\r\n<br /><span style=\"font-weight: bold;\">IBM BladeCenter HT</span><br />\r\nBladeCenter HT is the telecommunications company version of the IBM BladeCenter H, available with either AC or DC (48 V) power. It has 12 blade slots in 12U, but uses the same switches and blades as the regular BladeCenter H. To maintain NEBS Level 3 / ETSI compliance, special NEBS-compliant blades are available.<br /><br /><span style=\"font-weight: bold;\">IBM BladeCenter S</span><br />\r\nTargets mid-sized customers by offering storage inside the BladeCenter chassis, so no separate external storage needs to be purchased. It can also use 110 V power in the North American market, so it can be used outside the datacenter. 
When running at 120 V, the total chassis capacity is reduced.\r\n<ul><li>6 blade slots in 7U</li></ul>\r\n<ul><li>Shared Media tray with Optical Drive and 2x USB 2.0 ports</li></ul>\r\n<ul><li>Up to 12 hot-swap 3.5&quot; (or 24 2.5&quot;) SAS or SATA drives with RAID 0, 1 and 1E capability, RAID 5 and SAN capabilities optional with two SAS RAID controllers</li></ul>\r\n<ul><li>Two optional Disk Storage Modules for HDDs, six 3.5-inch SAS/SATA drives each.</li></ul>\r\n<ul><li>4 hot-swap I/O switch module bays</li></ul>\r\n<ul><li>1 Advanced Management Module as standard (no option for secondary Management Module)</li></ul>\r\n<ul><li>Two 950/1450-watt, hot-swap power modules and ability to have two optional 950/1450-watt power modules, offering redundancy and power for robust configurations.</li></ul>\r\n<ul><li>Four hot-swap redundant blowers, plus one fan in each power supply.</li></ul>","shortDescription":"The IBM BladeCenter was IBM's blade server architecture, until it was replaced by Flex System.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":13,"sellingCount":12,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"IBM BladeCenter","keywords":"","description":"Introduced in 2002, based on engineering work started in 1999, the IBM BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 Intel server processors and input/output (I/O) options. In Feb","og:title":"IBM BladeCenter","og:description":"Introduced in 2002, based on engineering work started in 1999, the IBM BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 Intel server processors and input/output (I/O) options. 
In Feb","og:image":"https://old.roi4cio.com/fileadmin/user_upload/IBM_BladeCenter.jpg"},"eventUrl":"","translationId":4957,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":517,"title":"Blade System","alias":"blade-system","description":" A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space, minimize power consumption and other considerations, while still having all the functional components to be considered a computer. Unlike a rack-mount server, a blade server needs a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole.\r\nIn a standard server-rack configuration, one rack unit or 1U—19 inches (480 mm) wide and 1.75 inches (44 mm) tall—defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form-factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. As of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems.\r\nEnclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. 
By locating these services in one place and sharing them among the blade computers, the overall utilization becomes higher. The specifics of which services are provided varies by vendor.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Power.</span></span> Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers may have redundant power supplies, again adding to the bulk and heat output of the design.\r\nThe blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures. This setup reduces the number of PSUs required to provide a resilient power supply.\r\nThe popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS).\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Cooling.</span></span> During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.\r\nA frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. 
Newer blade-enclosures feature variable-speed fans and control logic, or even liquid cooling systems that adjust to meet the system's cooling requirements.\r\nAt the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling with racks populated at over 50% full. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers. This is because one can fit up to 128 blade servers in the same rack that will only hold 42 1U rack mount servers.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Networking.</span></span> Blade servers generally include integrated or optional network interface controllers for Ethernet or host adapters for Fibre Channel storage systems or converged network adapter to combine storage and data via one Fibre Channel over Ethernet interface. In many blades at least one interface is embedded on the motherboard and extra interfaces can be added using mezzanine cards.\r\nA blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades.\r\nBlade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. 
Although blade server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor.\r\nEventual standardization of the technology might result in more choices for consumers; as of 2009, increasing numbers of third-party software vendors have started to enter this growing field.\r\nBlade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, these can suffer even more acutely from the heating, ventilation, and air conditioning problems that affect large conventional server farms.","materialsDescription":" <span style=\"font-weight: bold;\">What is a blade server?</span>\r\nA blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is a server in its own right, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA) and other input/output (I/O) ports.\r\nBlade servers allow more processing power in less rack space, simplifying cabling and reducing power consumption. According to a SearchWinSystems.com article on server technology, enterprises moving to blade servers can experience as much as an 85% reduction in cabling for blade installations over conventional 1U or tower servers. With so much less cabling, IT administrators can spend less time managing the infrastructure and more time ensuring high availability.\r\nEach blade typically comes with one or two local ATA or SCSI drives. 
For additional storage, blade servers can connect to a storage pool facilitated by network-attached storage (NAS) or a Fibre Channel or iSCSI storage-area network (SAN). The advantage of blade servers comes not only from the consolidation benefits of housing several servers in a single chassis, but also from the consolidation of associated resources (like storage and networking equipment) into a smaller architecture that can be managed through a single interface.\r\nA blade server is sometimes referred to as a high-density server and is typically used in a cluster of servers dedicated to a single task, such as:\r\n<ul><li>File sharing</li><li>Web page serving and caching</li><li>SSL encryption of Web communication</li><li>Transcoding of Web page content for smaller displays</li><li>Streaming audio and video content</li></ul>\r\nLike most clustering applications, blade servers can also be managed to include load balancing and failover capabilities.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Blade_System.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4962,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/IBM_LOGO.png","logo":true,"scheme":false,"title":"IBM POWER8 Server","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":100,"alias":"server-ibm-power8","companyTitle":"IBM","companyTypes":["supplier","vendor"],"companyId":177,"companyAlias":"ibm","description":"Reflecting the best in open source, big-data computing, IBM POWER8® servers offer the overwhelming processing strength of Linux on IBM Power® and the ability to engage in deep learning.<br />POWER8 servers provide easy-to-deploy cloud solutions and support SAP HANA workloads. 
POWER8 servers also work in conjunction with IBM Watson® — actually helping to make Watson even smarter.\r\n\r\n<span style=\"font-weight: bold;\">FEATURES</span><br />\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Parallel processing power</span></span><br />\r\nThe POWER8 server delivers parallel processing of data queries, enabling it to resolve queries faster than other processor architectures.<br />\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Boosted memory bandwidth</span></span><br />\r\nThe Coherent Accelerator Processor Interface (CAPI) enables the processor to talk directly to flash drives and use them as an extension of its own memory. The data being cached for in-memory databases can be accessed faster than on any other platform.<br /><br /><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Complex analytical capabilities</span></span><br />\r\nIBM Watson ingests large amounts of structured and semi-structured data, making it ideal for environments that previously would have required several data scientists to develop the queries needed to extract key information from the data. <br /><br /><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Cost savings with cloud computing</span></span><br />\r\nCompanies can achieve significant cost savings by using cloud computing to help them more intelligently manage, store and access data.<br /><br /><span style=\"font-weight: bold;\">BENEFITS</span><br />\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Gain fast access to Linux compute in the cloud</span></span><br />\r\nEasily extend your current infrastructure into the cloud and get developers up and running on Linux fast. 
<br /><br /><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Host data and analytics in Linux</span></span><br />\r\nTest, drive and port data and analytics solutions to Linux. Get a secure environment to prove out performance characteristics for Linux workloads. <br /><br /><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Integrate hybrid applications</span></span><br />\r\nMaximize performance and efficiency by ensuring systems are close to the data being analyzed. ","shortDescription":"IBM POWER8 servers combine high performance, storage and I/O to focus on increasing volumes of data, while maintaining system speed.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":5,"sellingCount":12,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"IBM POWER8 Server","keywords":"","description":"Reflecting the best in open source, big-data computing, IBM POWER8® servers offer the overwhelming processing strength of Linux on IBM Power® and the ability to engage in deep learning.<br />POWER8 servers provide easy-to-deploy cloud solutions and support SAP","og:title":"IBM POWER8 Server","og:description":"Reflecting the best in open source, big-data computing, IBM POWER8® servers offer the overwhelming processing strength of Linux on IBM Power® and the ability to engage in deep learning.<br />POWER8 servers provide easy-to-deploy cloud solutions and support SAP","og:image":"https://old.roi4cio.com/fileadmin/user_upload/IBM_LOGO.png"},"eventUrl":"","translationId":4963,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":35,"title":"Server","alias":"server","description":"In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". 
This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.\r\nClient–server systems are today most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.\r\nStrictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as noun and as verb) are frequently used, though servicer and servant are not. The word service (noun) may refer to either the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". 
For instance, web servers "serve web pages to users" or "service their requests".\r\nThe server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.\r\nWhile request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; this initial registration may be done by request–response. Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.","materialsDescription":" <span style=\"font-weight: bold;\">What is a server?</span>\r\nA server is a software or hardware device that accepts and responds to requests made over a network. The device that makes the request, and receives a response from the server, is called a client. On the Internet, the term "server" commonly refers to the computer system which receives a request for a web document and sends the requested information to the client.\r\n<span style=\"font-weight: bold;\">What are they used for?</span>\r\nServers are used to manage network resources. 
For example, a user may set up a server to control access to a network, send/receive an e-mail, manage print jobs, or host a website. They are also proficient at performing intense calculations. Some servers are committed to a specific task, often referred to as dedicated. However, many servers today are shared servers which can take on the responsibility of e-mail, DNS, FTP, and even multiple websites in the case of a web server.\r\n<span style=\"font-weight: bold;\">Why are servers always on?</span>\r\nBecause they are commonly used to deliver services that are constantly required, most servers are never turned off. Consequently, when servers fail, they can cause the network users and company many problems. To alleviate these issues, servers are commonly set up to be fault-tolerant.\r\n<span style=\"font-weight: bold;\">What are the examples of servers?</span>\r\nThe following list contains links to various server types:\r\n<ul><li>Application server;</li><li>Blade server;</li><li>Cloud server;</li><li>Database server;</li><li>Dedicated server;</li><li>Domain name service;</li><li>File server;</li><li>Mail server;</li><li>Print server;</li><li>Proxy server;</li><li>Standalone server;</li><li>Web server.</li></ul>\r\n<span style=\"font-weight: bold;\">How do other computers connect to a server?</span>\r\nWith a local network, the server connects to a router or switch that all other computers on the network use. Once connected to the network, other computers can access that server and its features. For example, with a web server, a user could connect to the server to view a website, search, and communicate with other users on the network.\r\nAn Internet server works the same way as a local network server, but on a much larger scale. The server is assigned an IP address by InterNIC, or by a web host.\r\nUsually, users connect to a server using its domain name, which is registered with a domain name registrar. 
When users connect to the domain name (such as "computerhope.com"), the name is automatically translated to the server's IP address by a DNS resolver.\r\nThe domain name makes it easier for users to connect to the server because the name is easier to remember than an IP address. Also, domain names enable the server operator to change the IP address of the server without disrupting the way that users access the server. The domain name can always remain the same, even if the IP address changes.\r\n<span style=\"font-weight: bold;\">Where are servers stored?</span>\r\nIn a business or corporate environment, a server and other network equipment are often stored in a closet or glasshouse. These areas help isolate sensitive computers and equipment from people who should not have access to them.\r\nServers that are remote or not hosted on-site are located in a data center. With these types of servers, the hardware is managed by another company and configured remotely by you or your company.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Server.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":6756,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/VX-2U-overview.png","logo":true,"scheme":false,"title":"Lenovo ThinkAgile VX","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":26,"alias":"lenovo-thinkagile-vx","companyTitle":"Lenovo","companyTypes":["vendor"],"companyId":318,"companyAlias":"lenovo","description":"<span lang=\"EN\">Lenovo ThinkAgile VX is an all-in-one hardware and software solution, Hyper-converged infrastructure (HCI), built on Lenovo ThinkSystem physical servers and VMware vSAN software. 
The system is designed to increase the flexibility and cost-effectiveness of IT infrastructures for enterprises of all sizes, and facilitate the transition to software-defined data centers. </span>\r\n<span lang=\"EN\"> </span>\r\n<span lang=\"EN\">ThinkAgile VX is a pre-configured solution that reduces data center complexity by combining servers, storage and virtualization software platforms into a common, managed resource pool. VX Series solutions are easily scalable, allowing customers to start with three physical nodes and gradually expand system capacity to almost any limit (hundreds or thousands of servers). </span>\r\n<span lang=\"EN\"> </span>\r\n<span lang=\"EN\">The platform is shipped with the Lenovo ThinkAgile Advantage service, through which Lenovo (or regional vendor partners) will install and configure the solution and provide training to the customer's employees who will operate the HCI system. As a result, the VX platform will be ready to use in a matter of hours, rather than a few days or weeks, as is the case with conventional IT solutions based on separate servers and storage systems.</span>","shortDescription":"Lenovo ThinkAgile VX is a hyper-converged infrastructure (HCI) built on Lenovo ThinkSystem physical servers and VMware vSAN software.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":0,"sellingCount":0,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Lenovo ThinkAgile VX","keywords":"","description":"<span lang=\"EN\">Lenovo ThinkAgile VX is an all-in-one hardware and software solution, Hyper-converged infrastructure (HCI), built on Lenovo ThinkSystem physical servers and VMware vSAN software. 
The system is designed to increase the flexibility and cost-effec","og:title":"Lenovo ThinkAgile VX","og:description":"<span lang=\"EN\">Lenovo ThinkAgile VX is an all-in-one hardware and software solution, Hyper-converged infrastructure (HCI), built on Lenovo ThinkSystem physical servers and VMware vSAN software. The system is designed to increase the flexibility and cost-effec","og:image":"https://old.roi4cio.com/fileadmin/user_upload/VX-2U-overview.png"},"eventUrl":"","translationId":6756,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":509,"title":"Converged and Hyper Converged System","alias":"converged-and-hyper-converged-system","description":" Converged and hyper-converged infrastructures simplify support for virtual desktop infrastructure and desktop virtualization, as they are designed to be easy to install and to handle complex tasks.\r\nConverged infrastructure combines the four main components of a data center in one package: computing devices, storage devices, network devices, and server virtualization tools. Hyper-converged infrastructure allows for tighter integration of a larger number of components using software tools.\r\nIn both converged and hyper-converged infrastructure, all elements are compatible with each other. This means you can purchase the necessary storage and network devices for your company at one time, and these components matter greatly in a virtual desktop infrastructure. It also simplifies the process of deploying such an infrastructure - something many companies that need to virtualize their desktop systems have been waiting for and will welcome.\r\nDespite their value and innovation, these technologies raise several questions about their intended use and differences. 
Let's try to figure out what functionality converged and hyper-converged infrastructures offer and how they differ.","materialsDescription":" <span style=\"font-weight: bold;\">What is converged infrastructure?</span>\r\nConverged infrastructure combines computing devices, storage, network devices and server virtualization tools in one chassis so that they can be managed from one place. Management capabilities may include the management of virtual desktop infrastructure, depending on the selected configuration and manufacturer.\r\nThe hardware included in a bundled converged infrastructure is pre-configured to support its target workloads: virtual desktop infrastructures, databases, special applications, and so on. In practice, however, you have little freedom to change the selected configuration.\r\nRegardless of the method chosen for extending the virtual desktop infrastructure environment, you should understand that subsequent vertical scaling will be costly and time-consuming. Adding individual components becomes complex and deprives you of many of the benefits of a converged infrastructure. Adding workstations and expanding storage capacity in a corporate infrastructure can be just as expensive, which underlines the need for proper planning of any virtual desktop infrastructure deployment.\r\nOn the other hand, all components of a converged infrastructure can work for a long time. For example, a complete server from such an infrastructure works well even without the rest of the infrastructure components.\r\n<span style=\"font-weight: bold;\">What is a hyper-converged infrastructure?</span>\r\nHyper-converged infrastructure builds on converged infrastructure and the concept of a software-defined data center. It combines all the components of the usual data center in one system. 
All four key components of the converged infrastructure are in place, but sometimes it also includes additional components, such as backup software, snapshot capabilities, data deduplication functionality, inline compression, wide-area network (WAN) optimization, and much more. Converged infrastructure relies primarily on hardware, while a software-defined data center often adapts to any hardware. In hyper-converged infrastructure, these two possibilities are combined.\r\nHyper-converged infrastructure is supported by one supplier. It can be managed as a single system with a single set of tools. To expand the infrastructure, you just need to install blocks of the necessary devices and resources (for example, storage) into the main system block. And this is done literally on the fly.\r\nSince hyper-converged infrastructure is software-defined (that is, the operation of the infrastructure is logically separated from the physical equipment), the mutual integration of components is tighter than in a conventional converged infrastructure, and the components themselves must be nearby to work correctly. This makes it possible to use a hyper-converged infrastructure to support even more workloads than conventional converged infrastructure, because its behavior can be defined and adjusted at the software level. In addition, you can make it work with specialized applications and workloads, which pre-configured converged infrastructures do not allow.\r\nHyper-converged infrastructure is especially valuable for working with a virtual desktop infrastructure because it allows you to scale up quickly without additional costs. 
Often, in the case of a classic virtual desktop infrastructure, things are completely different - companies need to buy more resources before scaling, or wait for virtual desktops to exhaust the allocated storage and network resources and then add new infrastructure.\r\nBoth scenarios require significant time and money. In the case of hyper-converged infrastructure, however, if you need to expand the storage, you can simply install the required devices in the existing stack. Scaling can be done quickly - in the time required to deliver the equipment. In this case, you do not have to go through the full procedure of re-evaluating and reconfiguring the corporate infrastructure.\r\nIn addition, when moving from physical PCs to virtual workstations, you will need devices to perform all the computational tasks that laptops and PCs typically perform. Hyper-converged infrastructure helps greatly here, as it often comes bundled with a large amount of flash memory, which has a positive effect on the performance of virtual desktops. This increases the speed of I/O operations, smooths work under high loads, and allows virus scanning and other types of monitoring to run in the background (without distracting users).\r\nThe flexibility of hyper-converged infrastructure makes it more scalable and cost-effective than converged infrastructure, since it has the ability to add computing and storage devices as needed. 
The cost of the initial investment for both infrastructures is high, but in the long term, the value of the investment should pay off.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Converged_and_Hyper_Converged_System.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4982,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/Oracle_Flexcube.png","logo":true,"scheme":false,"title":"Oracle FLEXCUBE","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":150,"alias":"oracle-flexcube","companyTitle":"Oracle","companyTypes":["supplier","vendor"],"companyId":164,"companyAlias":"oracle","description":"The financial services industry continues to evolve amidst disruption caused by an unprecedented proliferation of digital technologies and connectivity. This disruption coupled with several regulatory directives is also driving the emergence of connected ecosystems. To successfully address disruption, protect their customer relationships and business, effectively comply with regulations, stay competitive and leverage the ecosystem opportunity, banks must double down on transforming their systems so that they can leverage digital technologies and connectivity to deliver better services, experiences and value for their customers.\r\nWith technology at the core of banking, modernization of core systems is the cornerstone of digital transformation in a bank. Oracle FLEXCUBE Universal Banking can help banks jumpstart digital transformation and leapfrog their capabilities to stay relevant, competitive and compliant in a fast evolving industry. 
With its modern, digital, shrink-wrapped, pre-configured, interoperable, scalable and connected capabilities, Oracle FLEXCUBE Universal Banking can help catapult banks to the forefront of digital innovation and leadership.<br />\r\n<span style=\"font-weight: bold;\">ACCELERATED DIGITAL TRANSFORMATION</span><br />\r\nBanks can transform the way they understand customers, develop new products and services, focus on new business lines and initiatives, and deliver engaging experiences across multiple digital channels.<br />\r\n<span style=\"font-weight: bold;\">Oracle FLEXCUBE offers:</span>\r\n<ul><li>Multi-channel, multi-device and multi-vendor access coupled with best-in-class functionality that helps banks offer innovative services and frictionless experiences.</li></ul>\r\n<ul><li>Multi-dimensional views of customer data that enable a deeper understanding of customers as individuals and help banks offer personalized services and experiences that are highly contextual and relevant.</li></ul>\r\n<ul><li>Mobility, service ubiquity and an experience that drives stakeholder convenience.</li></ul>\r\n<span style=\"font-weight: bold;\">Key Business Benefits:</span>\r\n<ul><li>Offers business mobility, service experience, ubiquity and customer centricity</li></ul>\r\n<ul><li>Drives growth through customer centricity</li></ul>\r\n<ul><li>Enables an accelerated time-to-market</li></ul>\r\n<ul><li>Enables customized transformation using best-of-breed point or pre-integrated solutions</li></ul>\r\n<ul><li>Has a connected architecture that enables collaboration</li></ul>\r\n<ul><li>Enables Open Banking and API monetization</li></ul>\r\n<ul><li>Offers operational and cost efficiencies from automated decisioning</li></ul>","shortDescription":"The Oracle FLEXCUBE solution is designed for financial institutions and offers customer-centric core banking, online banking and private wealth management functions. 
","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":12,"sellingCount":8,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Oracle FLEXCUBE","keywords":"","description":"The financial services industry continues to evolve amidst disruption caused by an unprecedented proliferation of digital technologies and connectivity. This disruption coupled with several regulatory directives is also driving the emergence of connected ecosy","og:title":"Oracle FLEXCUBE","og:description":"The financial services industry continues to evolve amidst disruption caused by an unprecedented proliferation of digital technologies and connectivity. This disruption coupled with several regulatory directives is also driving the emergence of connected ecosy","og:image":"https://old.roi4cio.com/fileadmin/user_upload/Oracle_Flexcube.png"},"eventUrl":"","translationId":4983,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":158,"title":"Core Banking System","alias":"core-banking-system","description":"Core (centralized online real-time exchange) banking is a banking service provided by a group of networked bank branches where customers may access their bank account and perform basic transactions from any of the member branch offices.\r\nCore banking system is often associated with retail banking and many banks treat the retail customers as their core banking customers. Businesses are usually managed via the corporate banking division of the institution. Core banking covers basic depositing and lending of money.\r\nCore banking functions will include transaction accounts, loans, mortgages and payments. 
Banks make these services available across multiple channels like automated teller machines, Internet banking, mobile banking and branches.\r\nBanking software and network technology allow a bank to centralise its record keeping and allow access from any location.\r\nAdvancements in Internet and information technology have reduced manual work in banks and increased efficiency. Computer software is developed to perform core banking operations such as recording transactions, passbook maintenance, interest calculations on loans and deposits, customer records, balance of payments and withdrawal. This software is installed at different branches of a bank and then interconnected by means of computer networks based on telephones, satellite and the Internet.\r\nGartner defines a core banking system as a back-end system that processes daily banking transactions, and posts updates to accounts and other financial records. Core banking solutions typically include deposit, loan and credit-processing capabilities, with interfaces to general ledger systems and reporting tools. Core banking applications are often one of the largest single expenses for banks, and legacy software is a major issue in terms of allocating resources. Spending on these systems is based on a combination of service-oriented architecture and supporting technologies.\r\nMany banks implement custom applications for core banking. Others implement or customize commercial independent software vendor packages. Systems integrators like Cognizant, EdgeVerve Systems Limited, Capgemini, Accenture, IBM and Tata Consultancy Services implement these core banking packages at banks. More recently, entrants such as Probanx (since 2000) and Temenos (late 1990s) have also provided entry-level core banking software, focusing on neo-banks and electronic money institutions.\r\nOpen-source technology in core banking products or software can help banks to maintain their productivity and profitability at the same time. 
","materialsDescription":"<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What is a core banking solution (CBS)?</span></h1>\r\nToday banking as a business has grown tremendously and transformed itself from a deposit-taking and loan-providing system into an institution that provides an entire gamut of products and services under a wide umbrella. All such activities undertaken by a bank are called core banking.\r\nCORE is an acronym for "Centralized Online Real-time Exchange"; thus the bank’s branches can access applications from centralized data centers.\r\nBeyond retail banking customers, core banking is now also being extended to address the requirements of corporate clients and provide a comprehensive banking solution.<br />Digital core banking offers the following advantages to the bank:\r\n<ul><li>Improved operations which address customer demands and industry consolidation;</li><li>Errors due to multiple entries eradicated;</li><li>Easy ability to introduce new financial products and manage changes in existing products;</li><li>Seamless merging of back office data and self-service operations.</li></ul>\r\n<span style=\"font-weight: bold;\">Minimum features of a Core Banking Solution:</span>\r\n<ol><li>Customer onboarding.</li><li>Managing deposits and withdrawals.</li><li>Transactions management.</li><li>Interest calculation and management.</li><li>Payments processing (cash, cheques/checks, mandates, NEFT, RTGS etc.).</li><li>Customer relationship management (CRM) activities.</li><li>Designing new banking products.</li><li>Loans disbursal and management.</li><li>Accounts management.</li><li>Establishing criteria for minimum balances, interest rates, number of withdrawals allowed and so on.</li></ol>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">Choosing the best core banking system software</span></h1>\r\n<p class=\"align-left\">Today, there are four primary core banking providers, FIS, Fiserv, Jack Henry and D+H, that have 
managed to eat up 96 percent of the market share (90 percent for banks under $1 billion in assets and 98 percent for banks over $1 billion in assets). But there are also some strong players rounding out the remaining 4 percent.<br />Deciding on a core banking software solution is the first key task for banks and credit unions looking to make the switch. But the decision is not one to be taken lightly; as Forbes points out: “Core technologies are evolving into highly agile architectures, and the implications for making the wrong decision will be lasting — and could put banks at competitive risk.”</p>\r\n<p class=\"align-left\">To help your bank or credit union make the best use of your resources, Gartner identified the eight key criteria that have the most impact on CBS banking system decisions:<br /><br /></p>\r\n<ul><li> Functionality</li><li> Flexibility</li><li> Cost</li><li> Viability</li><li> Operational Performance</li><li> Program Management</li><li> Partner Management</li><li> Customer References</li></ul>\r\n<p class=\"align-left\"><br /><br /><br /></p>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Core_Banking_System1.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":5511,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/cisco_firepower.jpg","logo":true,"scheme":false,"title":"Cisco Firepower 9300 Series","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":125,"alias":"cisco-firepower-9300-series","companyTitle":"Cisco","companyTypes":["supplier","vendor"],"companyId":170,"companyAlias":"cisco","description":"The Cisco Firepower® 9300 is a scalable (beyond 1 Tbps when clustered), carrier-grade, modular platform designed for service providers, high-performance computing centers, large data centers, campuses, 
high-frequency trading environments, and other environments that require low (less than 5-microsecond offload) latency and exceptional throughput. Cisco Firepower 9300 supports flow-offloading, programmatic orchestration, and the management of security services with RESTful APIs. It is also available in Network Equipment Building Standards (NEBS)-compliant configurations. The 9300 Series platforms can run either the Cisco® Adaptive Security Appliance (ASA) Firewall or Cisco Firepower Threat Defense (FTD). \r\n<p class=\"align-center\"><b>Features:</b></p>\r\n<b><i>Scalable multiservice security </i></b>\r\nEliminate security gaps. Integrate and provision multiple Cisco and Cisco partner security services dynamically across the network fabric. See and correlate policy, traffic, and events across multiple services. \r\n<b><i>Expandable security modules </i></b>\r\nFlexibly scale your security performance. Meet business agility needs and enable rapid provisioning. \r\n<b><i>Carrier-grade performance </i></b>\r\nNEBS-compliant configurations available. Elevate threat defense and network performance with low-latency, large flow handling, and orchestration of security services. Protect Evolved Programmable Network, Evolved Services Platform, and Application Centric Infrastructure architectures. 
\r\n<b>Benefits:</b>\r\n<ul> <li>Designed for service provider and data center deployments </li> <li>Threat inspection up to 90 Gbps </li> <li>Includes AVC, with AMP and URL options </li> <li>Fail-to-wire interfaces available </li> </ul>","shortDescription":"Modular security platform for service providers\r\n","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":11,"sellingCount":18,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Cisco Firepower 9300 Series","keywords":"","description":"The Cisco Firepower® 9300 is a scalable (beyond 1 Tbps when clustered), carrier-grade, modular platform designed for service providers, high-performance computing centers, large data centers, campuses, high-frequency trading environments, and other environment","og:title":"Cisco Firepower 9300 Series","og:description":"The Cisco Firepower® 9300 is a scalable (beyond 1 Tbps when clustered), carrier-grade, modular platform designed for service providers, high-performance computing centers, large data centers, campuses, high-frequency trading environments, and other environment","og:image":"https://old.roi4cio.com/fileadmin/user_upload/cisco_firepower.jpg"},"eventUrl":"","translationId":5510,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":59,"title":"SCADA - Supervisory Control And Data Acquisition","alias":"scada-supervisory-control-and-data-acquisition","description":"<span style=\"font-weight: bold; \">SCADA</span> stands for <span style=\"font-weight: bold; \">Supervisory Control and Data Acquisition</span>, a term which describes the basic functions of a SCADA system. Companies use SCADA systems to control equipment across their sites and to collect and record data about their operations. SCADA is not a specific technology, but a type of application. 
Any application that gets operating data about a system in order to control and optimise that system is a SCADA application. That application may be a petrochemical distillation process, a water filtration system, a pipeline compressor, or just about anything else.\r\nSCADA solutions typically come in a combination of software and hardware elements, such as programmable logic controllers (PLCs) and remote terminal units (RTUs). Data acquisition in SCADA starts with PLCs and RTUs, which communicate with plant floor equipment such as factory machinery and sensors. Data gathered from the equipment is then sent to the next level, such as a control room, where operators can supervise the PLC and RTU controls using human-machine interfaces (HMIs). HMIs are an important element of SCADA systems. They are the screens that operators use to communicate with the SCADA system.\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">The major components of a SCADA system include:</span></p>\r\n<ul><li><span style=\"font-weight: bold;\">Master Terminal Unit (MTU).</span> It comprises a computer, a PLC and a network server that enables the MTU to communicate with the RTUs. The MTU initiates communication, collects and stores data, interfaces with operators and shares data with other systems.</li><li><span style=\"font-weight: bold;\">Remote Terminal Unit (RTU).</span> An RTU collects information from field sensors and sends the data on to the MTU. RTUs have storage capacity, so they retain data and transmit it when the MTU sends the corresponding command.</li><li><span style=\"font-weight: bold;\">Communication Network (defined by its network topology).</span> The communication network is the link between the RTUs in the field and the MTU in the central location. A bidirectional wired or wireless communication channel is used for networking. 
Various other communication mediums like fiber optic cables, twisted pair cables, etc. are also used.</li></ul>\r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">Objectives of a Supervisory Control and Data Acquisition system</span></p>\r\n<ul><li><span style=\"font-weight: bold;\">Monitor:</span> The SCADA system continuously monitors physical parameters</li><li><span style=\"font-weight: bold;\">Measure:</span> It measures parameters for processing</li><li><span style=\"font-weight: bold;\">Data Acquisition:</span> It acquires data from RTUs, data loggers, etc.</li><li><span style=\"font-weight: bold;\">Data Communication:</span> It communicates and transmits large amounts of data between the MTU and the RTUs</li><li><span style=\"font-weight: bold;\">Controlling:</span> Online real-time monitoring and controlling of the process</li><li><span style=\"font-weight: bold;\">Automation:</span> It enables automatic data transmission and operation</li></ul>\r\n\r\n","materialsDescription":"<h1 class=\"align-center\">Who Uses SCADA?</h1>\r\nSCADA systems are used by industrial organizations and companies in the public and private sectors to control and maintain efficiency, distribute data for smarter decisions, and communicate system issues to help mitigate downtime. Supervisory control systems work well in many different types of enterprises because they can range from simple configurations to large, complex installations. 
They are the backbone of many modern industries, including:\r\n<ul><li>Energy</li><li>Food and beverage</li><li>Manufacturing</li><li>Oil and gas</li><li>Power</li><li>Recycling</li><li>Transportation</li><li>Water and waste water</li><li>And many more</li></ul>\r\nVirtually anywhere you look in today's world, there is some type of SCADA monitoring system running behind the scenes: maintaining the refrigeration systems at the local supermarket, ensuring production and safety at a refinery, achieving quality standards at a waste water treatment plant, or even tracking your energy use at home, to give a few examples. Effective SCADA systems can result in significant savings of time and money. Numerous case studies have been published highlighting the benefits and savings of using modern SCADA software.\r\n<h1 class=\"align-center\">Benefits of using SCADA software</h1>\r\nModern SCADA software provides numerous benefits to businesses. Some of these advantages include:\r\n<span style=\"font-weight: bold; \">Easier engineering:</span> An advanced supervisory control application provides easy-to-locate tools, wizards, graphic templates and other pre-configured elements, so engineers can create automation projects and set parameters quickly, even if they don't have programming experience. You can also easily maintain and expand existing applications as needed. The ability to automate the engineering process allows users, particularly system integrators and original equipment manufacturers (OEMs), to set up complex projects much more efficiently and accurately.\r\n<span style=\"font-weight: bold; \">Improved data management:</span> A high-quality SCADA system makes it easier to collect, manage, access and analyze your operational data. It can enable automatic data recording and provide a central location for data storage. 
Additionally, it can transfer data to other systems such as MES and ERP as needed. \r\n<span style=\"font-weight: bold; \">Greater visibility:</span> One of the main advantages of using SCADA software is the improvement in visibility into your operations. It provides you with real-time information about your operations and enables you to conveniently view that information via an HMI. SCADA monitoring can also help in generating reports and analyzing data.\r\n<span style=\"font-weight: bold; \">Enhanced efficiency:</span> A SCADA system allows you to streamline processes through automated actions and user-friendly tools. The data that SCADA provides allows you to uncover opportunities for improving the efficiency of your operations, which can be used to make long-term changes to processes or even respond to real-time changes in conditions.\r\n<span style=\"font-weight: bold; \">Increased usability:</span> SCADA systems enable workers to control equipment more quickly, easily and safely through an HMI. Rather than having to control each piece of machinery manually, workers can manage them remotely and often control many pieces of equipment from a single location. Managers, even those who are not currently on the floor, also gain this capability.\r\n<span style=\"font-weight: bold; \">Reduced downtime:</span> A SCADA system can detect faults at an early stage and push instant alerts to the responsible personnel. Powered by predictive analytics, a SCADA system can also inform you of a potential issue with the machinery before it fails and causes larger problems. These features can help improve the overall equipment effectiveness (OEE) and reduce the time and cost of troubleshooting and maintenance.\r\n<span style=\"font-weight: bold;\">Easy integration:</span> Connectivity to existing machine environments is key to removing data silos and maximizing productivity. 
\r\n<span style=\"font-weight: bold;\">Unified platform:</span>All of your data is also available in one platform, which helps you to get a clear overview of your operations and take full advantage of your data. All users also get real-time updates locally or remotely, ensuring everyone on your team is on the same page.<br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/SCADA__-_Supervisory_Control_And_Data_Acquisition.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":5259,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/lenovo_logo.png","logo":true,"scheme":false,"title":"All-in-One PC Lenovo ThinkCentre M Series","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":26,"alias":"all-in-one-pc-lenovo-thinkcentre-m-series","companyTitle":"Lenovo","companyTypes":["vendor"],"companyId":318,"companyAlias":"lenovo","description":"<span style=\"font-weight: bold; \">Productivity enhancers</span>\r\nTackle spreadsheets, multiple presentations, and photo-editing with ease. With powerful Intel Core processors, the latest DDR4 computer memory, and SSD storage options, each ThinkCentre AIO is a powerful performer. Load and transfer files at lightning speed — essential for time-critical applications that require a large memory capacity or fast storage. 
Get things done — quickly and easily.\r\n<span style=\"font-weight: bold; \">Adapts to you</span>\r\nVersatile stands give you the freedom to use your display at any angle with tilt, height, and swivel functionality that can adapt to a range of working styles — whether sitting at a desk, or standing to serve customers, you’ll always have the best view.\r\n<span style=\"font-weight: bold;\">ThinkCentre M920z AIO</span>\r\n<span style=\"font-weight: bold;\">ThinkCentre M820z AIO</span>\r\n\r\n","shortDescription":"Lenovo M AIO Series. With their minimal footprint, professional appearance, and enterprise-level productivity, these all-in-ones are a welcome addition to the corporate desk.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":17,"sellingCount":4,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"All-in-One PC Lenovo ThinkCentre M Series","keywords":"","description":"<span style=\"font-weight: bold; \">Productivity enhancers</span>\r\nTackle spreadsheets, multiple presentations, and photo-editing with ease. With powerful Intel Core processors, the latest DDR4 computer memory, and SSD storage options, each ThinkCentre AIO is a ","og:title":"All-in-One PC Lenovo ThinkCentre M Series","og:description":"<span style=\"font-weight: bold; \">Productivity enhancers</span>\r\nTackle spreadsheets, multiple presentations, and photo-editing with ease. 
With powerful Intel Core processors, the latest DDR4 computer memory, and SSD storage options, each ThinkCentre AIO is a ","og:image":"https://old.roi4cio.com/fileadmin/user_upload/lenovo_logo.png"},"eventUrl":"","translationId":5260,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":471,"title":"Hardware","alias":"hardware","description":" Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, monitor, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is instructions that can be stored and run by hardware. Hardware is so termed because it is \"hard\" or rigid with respect to changes or modifications, whereas software is \"soft\" because it is easy to update or change. Intermediate between software and hardware is \"firmware\", which is software that is strongly coupled to the particular hardware of a computer system and thus the most difficult to change but also among the most stable with respect to consistency of interface. The progression from levels of \"hardness\" to \"softness\" in computer systems parallels a progression of layers of abstraction in computing.\r\nHardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components.\r\nThe template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. 
The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system.","materialsDescription":" <span style=\"font-weight: bold; \">What does Hardware (H/W) mean?</span>\r\nHardware (H/W), in the context of technology, refers to the physical elements that make up a computer or electronic system and everything else involved that is physically tangible. This includes the monitor, hard drive, memory and CPU. Hardware works hand-in-hand with firmware and software to make a computer function.\r\n<span style=\"font-weight: bold; \">What are the types of computer systems?</span>\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Personal computer</span></span>\r\nThe personal computer, also known as the PC, is one of the most common types of computer due to its versatility and relatively low price. Laptops are generally very similar, although they may use lower-power or reduced size components, thus lower performance.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Case</span></span>\r\nThe computer case encloses and holds most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives, and power supplies, and controls and directs the flow of cooling air over internal components. The case is also part of the system to control electromagnetic interference radiated by the computer, and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. 
Portable and laptop computers require cases that provide impact protection for the unit. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Power supply</span></span>\r\nA power supply unit (PSU) converts alternating current (AC) electric power to low-voltage direct current (DC) power for the internal components of the computer. Laptops are capable of running from a built-in battery, normally for a period of hours. The PSU typically uses a switched-mode power supply (SMPS), with power MOSFETs (power metal–oxide–semiconductor field-effect transistors) used in the converters and regulator circuits of the SMPS.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Motherboard</span></span>\r\nThe motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots. The integrated circuit (IC) chips in a computer typically contain billions of tiny metal–oxide–semiconductor field-effect transistors (MOSFETs).\r\nComponents directly attached to or part of the motherboard include:\r\n<ul><li><span style=\"font-weight: bold; \">The CPU (central processing unit)</span>, which performs most of the calculations that enable a computer to function and is often referred to as the brain of the computer. It fetches program instructions from random-access memory (RAM), interprets and processes them, and then sends results back so that the relevant components can carry out the instructions. 
The CPU is a microprocessor, which is fabricated on a metal–oxide–semiconductor (MOS) integrated circuit (IC) chip. It is usually cooled by a heat sink and fan, or water-cooling system. Most newer CPUs include an on-die graphics processing unit (GPU). The clock speed of the CPU governs how fast it executes instructions, and is measured in GHz; typical values lie between 1 GHz and 5 GHz. Many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling.</li><li><span style=\"font-weight: bold; \">The chipset</span>, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory; as well as the south bridge, which is connected to the north bridge, and supports auxiliary interfaces and buses; and, finally, a Super I/O chip, connected through the south bridge, which supports the slowest and most legacy components like serial ports, hardware monitoring and fan control.</li><li><span style=\"font-weight: bold; \">Random-access memory (RAM)</span>, which stores the code and data that are being actively accessed by the CPU. For example, when a web browser is opened on the computer it takes up memory; this is stored in the RAM until the web browser is closed. It is typically a type of dynamic RAM (DRAM), such as synchronous DRAM (SDRAM), where MOS memory chips store data on memory cells consisting of MOSFETs and MOS capacitors. RAM usually comes on dual in-line memory modules (DIMMs) in the sizes of 2GB, 4GB, and 8GB, but can be much larger.</li><li><span style=\"font-weight: bold; \">Read-only memory (ROM)</span>, which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as bootstrapping, or \"booting\" or \"booting up\". 
The ROM is typically a nonvolatile BIOS memory chip, which stores data on floating-gate MOSFET memory cells.</li><li><span style=\"font-weight: bold; \">The BIOS (Basic Input Output System)</span> includes boot firmware and power management firmware. Newer motherboards use Unified Extensible Firmware Interface (UEFI) instead of BIOS.</li><li><span style=\"font-weight: bold; \">Buses</span> that connect the CPU to various internal components and to expansion cards for graphics and sound.</li><li><span style=\"font-weight: bold; \">The CMOS</span> (complementary MOS) battery, which powers the CMOS memory for date and time in the BIOS chip. This battery is generally a watch battery.</li><li><span style=\"font-weight: bold; \">The video card</span> (also known as the graphics card), which processes computer graphics. More powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games or running computer graphics software. A video card contains a graphics processing unit (GPU) and video memory (typically a type of SDRAM), both fabricated on MOS integrated circuit (MOS IC) chips.</li><li><span style=\"font-weight: bold; \">Power MOSFETs</span> make up the voltage regulator module (VRM), which controls how much voltage other hardware components receive.</li></ul>\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Expansion cards</span></span>\r\nAn expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus. Expansion cards can be used to obtain or expand on features not offered by the motherboard.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Storage devices</span></span>\r\nA storage device is any computing hardware and digital media that is used for storing, porting and extracting data files and objects. 
It can hold and store information both temporarily and permanently, and can be internal or external to a computer, server or any similar computing device. Data storage is a core function and fundamental component of computers.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Fixed media</span></span>\r\nData is stored by a computer using a variety of media. Hard disk drives (HDDs) are found in virtually all older computers, due to their high capacity and low cost, but solid-state drives (SSDs) are faster and more power efficient, although currently more expensive than hard drives in terms of dollar per gigabyte, so are often found in personal computers built post-2007. SSDs use flash memory, which stores data on MOS memory chips consisting of floating-gate MOSFET memory cells. Some systems may use a disk array controller for greater performance or reliability.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Removable media</span></span>\r\nTo transfer data between computers, an external flash memory device (such as a memory card or USB flash drive) or optical disc (such as a CD-ROM, DVD-ROM or BD-ROM) may be used. Their usefulness depends on being readable by other systems; the majority of machines have an optical disk drive (ODD), and virtually all have at least one Universal Serial Bus (USB) port.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Input and output peripherals</span></span>\r\nInput and output devices are typically housed externally to the main computer chassis. The following are either standard or very common to many computer systems.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Input</span></span>\r\nInput devices allow the user to enter information into the system, or control its operation. Most personal computers have a mouse and keyboard, but laptop systems typically use a touchpad instead of a mouse. 
Other input devices include webcams, microphones, joysticks, and image scanners.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Output device</span></span>\r\nOutput devices display information in a human readable form. Such devices could include printers, speakers, monitors or a Braille embosser.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Mainframe computer</span></span>\r\nA mainframe computer is a much larger computer that typically fills a room and may cost many hundreds or thousands of times as much as a personal computer. They are designed to perform large numbers of calculations for governments and large enterprises.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Departmental computing</span></span>\r\nIn the 1960s and 1970s, more and more departments started to use cheaper and dedicated systems for specific purposes like process control and laboratory automation.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Supercomputer</span></span>\r\nA supercomputer is superficially similar to a mainframe, but is instead intended for extremely demanding computational tasks. As of June 2018, the fastest supercomputer on the TOP500 supercomputer list is the Summit, in the United States, with a LINPACK benchmark score of 122.3 PFLOPS, exceeding the previous record holder, Sunway TaihuLight, by around 29 PFLOPS.\r\nThe term supercomputer does not refer to a specific technology. Rather it indicates the fastest computations available at any given time. In mid-2011, the fastest supercomputers boasted speeds exceeding one petaflop, or 1 quadrillion (10^15 or 1,000 trillion) floating point operations per second. Supercomputers are fast but extremely costly, so they are generally used by large organizations to execute computationally demanding tasks involving large data sets. Supercomputers typically run military and scientific applications. 
Although costly, they are also being used for commercial applications where huge amounts of data must be analyzed. For example, large banks employ supercomputers to calculate the risks and returns of various investment strategies, and healthcare organizations use them to analyze giant databases of patient data to determine optimal treatments for various diseases and conditions. ","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Hardware.jpg"},{"id":37,"title":"PC - personal computer","alias":"pc-personal-computer","description":"A personal computer (PC) is a multi-purpose computer whose size, capabilities, and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not time-shared by many people at the same time.\r\nInstitutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, usually these systems run commercial software, free-of-charge software (\"freeware\"), which is most often proprietary, or free and open-source software, which is provided in \"ready-to-run\", or binary, form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. 
This contrasts with mobile systems, where software is often only available through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support by the manufacturer.\r\nSince the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry. These include Apple's macOS and free and open-source Unix-like operating systems.\r\nThe advent of personal computers and the concurrent Digital Revolution have significantly affected the lives of people in all countries.\r\n\"PC\" is an initialism for \"personal computer\". The IBM Personal Computer incorporated the designation in its model name. It is sometimes useful to distinguish personal computers of the \"IBM Personal Computer\" family from personal computers made by other manufacturers. For example, \"PC\" is used in contrast with \"Mac\", an Apple Macintosh computer. Since none of these Apple products were mainframes or time-sharing systems, they were all \"personal computers\" and not \"PC\" (brand) computers.","materialsDescription":" <span style=\"font-weight: bold;\">What types of stationary personal computers exist?</span>\r\n<span style=\"font-weight: bold;\">Desktops</span> - refer to the type of stationary PC. As the name suggests, these are devices that are installed and used on a desk and are not moved during operation. As a rule, representatives of this group are high-performance, powerful devices. They consist of a system unit (a rectangular box), to which a monitor, keyboard and mouse are connected.\r\n<span style=\"font-weight: bold;\">Servers</span> - this type of computer has its own specific tasks, which it performs either remotely or on site. The vast majority of servers are quite powerful machines. 
Servers look slightly different from a typical PC - they are mounted in metal racks that resemble shelving units. The racks themselves are placed in a dedicated server room, where the required temperature is maintained.\r\n<span style=\"font-weight: bold;\">Nettops</span> are another type of stationary PC. The system unit is compact, usually with low power consumption and noise. Due to their small size, nettops have lower performance, but they fit well into a home environment and do not occupy expensive office space.\r\n<span style=\"font-weight: bold;\">Microcomputers</span> are computers that fit in a miniature enclosure resembling a flash drive. The microcomputer itself has no output device, so it connects to a monitor or TV via HDMI. Input devices, such as a mouse or keyboard, are connected via the built-in USB ports or Bluetooth. Technical specifications depend on the configuration, as with any other PC.\r\n<span style=\"font-weight: bold;\">All-in-one PCs (monoblocks)</span> are also stationary PCs. The system unit and the monitor form a single unit: the components are housed in a compartment mounted on the back of the monitor. An all-in-one has a clean appearance and does not take up much space.\r\n<span style=\"font-weight: bold;\">What are the types of portable personal computers?</span>\r\nPortable (laptop) computers are smaller and lighter than desktops and have more capacious batteries, which makes sense because they are designed to be carried around.\r\n<span style=\"font-weight: bold;\">Laptops and netbooks</span> are portable PCs with a battery for operation away from mains power. The case is a clamshell, with a screen at the top and a keyboard at the bottom. 
Netbooks are smaller than laptops and accordingly have lower performance, although their battery life is longer.\r\n<span style=\"font-weight: bold;\">Tablet laptops</span> are portable PCs whose case consists of a touchscreen display. Their main uses are browsing the Internet, watching videos, listening to audio, gaming and other applications. Their compact dimensions make this group especially popular with travelers. Tablet laptops have a keyboard that either folds up or slides out of a niche under the screen. In tablets, the touchscreen is the input medium. For this group, battery life is important.\r\n<span style=\"font-weight: bold;\">Pocket PCs and smartphones</span> are also portable PCs. Their distinctive features are a small size and long battery life. The input tool is either a touch screen or a slide-out keyboard.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_PC.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":3215,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/Oracle_Enterprise_Manager.png","logo":true,"scheme":false,"title":"Oracle Enterprise Manager","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":150,"alias":"oracle-enterprise-manager","companyTitle":"Oracle","companyTypes":["supplier","vendor"],"companyId":164,"companyAlias":"oracle","description":"<span style=\"color: rgb(97, 97, 97); \">Oracle Enterprise Manager (OEM or EM) is a set of web-based tools aimed at managing software and hardware produced by Oracle Corporation as well as by some non-Oracle entities.</span>\r\n<span style=\"font-weight: bold; \"><span style=\"color: rgb(97, 97, 97); \">Modern Systems Management</span></span>\r\nAs an IT operations professional, 
your job is more critical than ever because cloud operations are now a fact of life. From managing on-premises assets to deploying and managing new applications to the cloud, Oracle provides a comprehensive solution for managing your environments with Oracle Management Cloud and Oracle Enterprise Manager.\r\nOracle Enterprise Manager has traditionally provided deep management for the Oracle stack using an on-premises delivery method. Oracle Management Cloud is our next-generation, cloud-based management offering powered by machine learning and big data analytics.\r\n<span style=\"font-weight: bold; \">An Extensive Portfolio of Management Solutions</span>\r\n<span style=\"font-weight: bold; \">Cloud Management</span>\r\nFor existing Oracle Enterprise Manager customers, managing cloud assets is possible right within the cloud control user interface. For new customers, the easiest way to monitor cloud assets is to use Oracle Management Cloud.\r\n<span style=\"font-weight: bold; \">Application Management</span>\r\nManage Oracle packaged applications, including - but not limited to - Oracle E-Business Suite, Siebel, PeopleSoft, JD Edwards EnterpriseOne, Tax and Utilities, Oracle Communications applications, and Primavera.\r\n<span style=\"font-weight: bold; \">Middleware Management</span>\r\nOracle Enterprise Manager provides a comprehensive management solution for Oracle WebLogic Server, Oracle Fusion Middleware, and non-Oracle middleware technology such as Apache Tomcat, JBoss Application Server, and IBM WebSphere Application Server. 
The solution offers capabilities spanning configuration and compliance management, patching, provisioning, and performance management, as well as administration and auditing.\r\n<span style=\"font-weight: bold; \">Database Management</span>\r\nTake advantage of Oracle&rsquo;s time-tested and popular solutions including Diagnostics Pack, Tuning Pack, Real Application Testing, and related technologies to manage Oracle Databases.\r\n<span style=\"font-weight: bold; \">Hardware and Virtualization Management</span>\r\nManage physical and virtual server environments including Oracle Solaris and Oracle Linux operating systems and virtual environments (Solaris Zones and OVM for SPARC).\r\n<span style=\"font-weight: bold; \">Application Performance Management</span>\r\nManage web and Java applications built on Oracle WebLogic Server and Oracle Databases. Monitor web browser activity and application transactions to optimize user experience and application performance.\r\n<span style=\"font-weight: bold; \">Application Quality Management</span>\r\nA complete testing solution for Oracle Database, Oracle packaged applications, and custom web applications.\r\n<span style=\"font-weight: bold; \">Engineered Systems Management</span>\r\nManage Exadata Database Machine with comprehensive lifecycle management, from monitoring to management and ongoing maintenance.\r\n<span style=\"font-weight: bold; \">Lifecycle Management</span>\r\nPowerful capabilities to aid consolidation, enforce standardization, and deploy automation.\r\n<span style=\"font-weight: bold;\">Heterogeneous Management</span>\r\nExtend Oracle Enterprise Manager to monitor non-Oracle technologies. 
For customers new to Oracle Enterprise Manager, please review Oracle Management Cloud for cloud-based monitoring of heterogeneous environments.","shortDescription":"Enterprise Manager allows administrators to manage the work of complex information systems built primarily on the basis of Oracle technologies, including software products from other companies.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":0,"sellingCount":3,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Oracle Enterprise Manager","keywords":"","description":"<span style=\"color: rgb(97, 97, 97); \">Oracle Enterprise Manager (OEM or EM) is a set of web-based tools aimed at managing software and hardware produced by Oracle Corporation as well as by some non-Oracle entities.</span>\r\n<span style=\"font-weight: bold; \"><s","og:title":"Oracle Enterprise Manager","og:description":"<span style=\"color: rgb(97, 97, 97); \">Oracle Enterprise Manager (OEM or EM) is a set of web-based tools aimed at managing software and hardware produced by Oracle Corporation as well as by some non-Oracle entities.</span>\r\n<span style=\"font-weight: bold; \"><s","og:image":"https://old.roi4cio.com/fileadmin/user_upload/Oracle_Enterprise_Manager.png"},"eventUrl":"","translationId":3216,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":243,"title":"Database Development and Management Tools","alias":"database-development-and-management-tools","description":" Many companies create various multi-functional applications to facilitate the management, development and administration of databases.\r\nMost relational databases consist of two separate components: a “back-end” where data is stored and a “front-end” —a user interface for interacting with data. 
This design is sensible: it parallels a two-tier programming model that separates the data layer from the user interface, letting software vendors concentrate on improving their products. This model opens doors for third parties who create their own applications for interacting with various databases.\r\nDatabase development tools can be used to create the following kinds of programs:\r\n<ul><li>client programs;</li><li>database servers and their individual components;</li><li>custom applications.</li></ul>\r\nPrograms of the first and second types are relatively small, since they are intended mainly for system programmers. Packages of the third type are much larger, but still smaller than a full-featured DBMS.\r\nDevelopment tools for custom applications include programming systems, program libraries for various programming languages, and development automation packages (including client-server systems).<br />A database management system (DBMS) is a set of general- or special-purpose software and language tools that manages the creation and use of databases.\r\nIn other words, a DBMS is a set of programs that allow you to create a database (DB) and manipulate its data (insert, update, delete and select). 
The system ensures safe and reliable storage and data integrity, and provides the means to administer the database.","materialsDescription":" <span style=\"font-weight: bold;\">The main functions of the DBMS:</span>\r\n<ul><li>data management in external memory (on disk);</li><li>data management in RAM using a disk cache;</li><li>change logging, backup and recovery of databases after failures;</li><li>support for database languages (data definition language, data manipulation language).</li></ul>\r\n<span style=\"font-weight: bold;\">The composition of the DBMS:</span>\r\nUsually, a modern DBMS contains the following components:\r\n<ul><li>the core, which is responsible for managing data in external memory and RAM, and for logging;</li><li>the database language processor, which optimizes queries that retrieve and modify data and, as a rule, generates machine-independent executable internal code;</li><li>a run-time support subsystem that interprets data manipulation programs and provides the user interface to the DBMS;</li><li>service programs (external utilities) that provide a number of additional capabilities for maintaining an information system.</li></ul>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Database_Development_and_Management_Tools.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":5520,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/IBM.png","logo":true,"scheme":false,"title":"IBM Proventia Network IPS","vendorVerified":0,"rating":"0.00","implementationsCount":0,"suppliersCount":0,"supplierPartnersCount":100,"alias":"ibm-proventia-network-ips","companyTitle":"IBM","companyTypes":["supplier","vendor"],"companyId":177,"companyAlias":"ibm","description":"The <b>IBM Proventia Network Intrusion Prevention System (IPS)</b> is 
designed to block Internet threats before they adversely affect your business. The system protects all three network levels: the internal perimeter, the external perimeter, and remote segments. With proprietary technology that combines wire-speed performance, intelligent security features and multi-level protection, IBM Internet Security Systems (ISS) provides proactive protection - stopping a threat before it reaches its target. \r\n<ul><li>Performance</li><li>Security</li><li>Reliability</li><li>Implementation</li><li>Management</li><li>Confidentiality.</li></ul>\r\n<b>Do not compromise when it comes to protection or performance</b>\r\nSecurity should not come at the expense of network performance. The purpose-built Proventia Network IPS appliance offers high throughput, low latency and long uptime, ensuring efficient and safe network operation. Its distinctive features include:\r\n<ul><li>Wide throughput range (10 Mbps - 5 Gbps)</li><li>Intelligent deep packet inspection using FlowSmart technology</li><li>Low latency</li><li>Continued data transmission in case of a system error or power outage</li></ul>\r\nSecurity is only achieved with proactive protection. 
The Proventia Network IPS is designed to protect networks from all types of attacks, including: \r\n<ul><li>Network worms</li><li>Spyware attacks</li><li>P2P applications</li><li>Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks</li><li>Cross-site scripting</li><li>SQL injection</li><li>Phishing</li><li>Buffer overflow attacks</li><li>Directory traversal on web servers</li></ul>","shortDescription":"Securing networks with the IBM Proventia Network Intrusion Prevention System","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":4,"sellingCount":3,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"IBM Proventia Network IPS","keywords":"","description":"The <b>IBM Proventia Network Intrusion Prevention System (IPS)</b> is designed to block Internet threats before they adversely affect your business. This system protects all three network levels: the internal perimeter, the external perimeter, and the remote s","og:title":"IBM Proventia Network IPS","og:description":"The <b>IBM Proventia Network Intrusion Prevention System (IPS)</b> is designed to block Internet threats before they adversely affect your business. 
This system protects all three network levels: the internal perimeter, the external perimeter, and the remote s","og:image":"https://old.roi4cio.com/fileadmin/user_upload/IBM.png"},"eventUrl":"","translationId":5521,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":3217,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/HPE_Apollo_4000.jpg","logo":true,"scheme":false,"title":"HPE Apollo 4000 Systems","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":451,"alias":"hpe-apollo-4000-systems","companyTitle":"Hewlett Packard Enterprise","companyTypes":["supplier","vendor"],"companyId":172,"companyAlias":"hewlett-packard-enterprise","description":"HPE Apollo 4000 systems are specifically optimised to service the data storage-centric workloads that are key to digital transformation – big data analytics and software-defined storage.\r\n<span style=\"font-weight: bold; \">Purpose-built for data storage-centric workloads</span>\r\nSecurely store and efficiently analyse your rapidly growing volumes of data for business value – all while meeting your data centre operations challenges – with Apollo 4000 systems.\r\n\r\n<span style=\"font-weight: bold; \">The Apollo 4000 portfolio</span>\r\nDensity-optimised platforms for data storage-centric workloads\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">HPE Apollo 4200 server</span></span>\r\nThe improved system architecture of this Gen10 server yields accelerated workload performance and enhanced security. 
The industry’s most versatile 2U platform, it delivers up to 28 LFF or 54 SFF drives in an easily serviceable, standard rack-depth chassis.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">HPE Apollo 4510 system</span></span>\r\nThis system delivers the optimal feature set for enterprise data centre deployments at petabyte scale, including server-based object storage and secondary storage. It accommodates up to 60 LFF drives loaded into two front-accessible drawers for easy serviceability, all in a 4U standard-depth rack.\r\n\r\n<span style=\"font-weight: bold; \">Technical overview</span>\r\n<span style=\"font-weight: bold; \">Form factor</span>\r\n<ul><li>2U chassis (Apollo 4200)</li><li>4U chassis (Apollo 4510)</li></ul>\r\n<span style=\"font-weight: bold; \">Storage</span>\r\n<ul><li>Support for SAS, SATA, NVMe, and SSDs</li><li>Up to 28 LFF and 54 SFF drives in 2U (Apollo 4200)</li><li>Up to 60 LFF drives in 4U (Apollo 4510)</li></ul>\r\n<span style=\"font-weight: bold; \">CPU</span>\r\n<ul><li>Up to 2 Intel Xeon Scalable Processor</li></ul>\r\n<span style=\"font-weight: bold; \">Memory</span>\r\n<ul><li>Up to 1024 GB DDR4 memory (16 DIMMs)</li></ul>\r\n\r\n<br /><span style=\"font-weight: bold;\">HPE Apollo 4000 systems in action:</span>\r\n<span style=\"font-weight: bold;\">Big data and analytics solutions</span>\r\nAccelerate business insights and gain a competitive advantage – choose from multiple, modular Hadoop reference architectures to increase operational efficiencies, influence product development and quality, and securely manage big data workloads.\r\n<span style=\"font-weight: bold;\">General file and object storage</span>\r\nDrive value to your organisation and effectively address unstructured data storage requirements with Apollo-based file and object storage solutions spanning your needs from affordable NAS to durable petabyte-scale storage.\r\n<span style=\"font-weight: bold;\">High-performance computing and AI 
storage</span>\r\nProviding the necessary high-speed concurrent access to data, HPE offers a comprehensive portfolio of dedicated storage products that enable the full power of HPC by supporting clustered computing and distributed parallel computing.","shortDescription":"Apollo 4000 Systems are the servers and the systems that are purpose-built for big data analytics, software-defined storage, backup and archive, and other data storage-intensive workloads.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":19,"sellingCount":7,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"HPE Apollo 4000 Systems","keywords":"","description":"HPE Apollo 4000 systems are specifically optimised to service the data storage-centric workloads that are key to digital transformation – big data analytics and software-defined storage.\r\n<span style=\"font-weight: bold; \">Purpose-built for data storage-centric","og:title":"HPE Apollo 4000 Systems","og:description":"HPE Apollo 4000 systems are specifically optimised to service the data storage-centric workloads that are key to digital transformation – big data analytics and software-defined storage.\r\n<span style=\"font-weight: bold; \">Purpose-built for data storage-centric","og:image":"https://old.roi4cio.com/fileadmin/user_upload/HPE_Apollo_4000.jpg"},"eventUrl":"","translationId":3218,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":519,"title":"Density Optimized Server","alias":"density-optimized-server","description":" The high-density server system is a modern concept of building an economical and scalable computing equipment subsystem within the data processing center (hereinafter referred to as the data center).\r\nThe high-density server system includes server equipment, modules of the organization of network interaction, technologies of resource virtualization and has constructive 
opportunities to install all the components of a modern data center within a single structural unit (chassis).\r\nThe virtualization tools and the adaptive management system combine the high-density server system's resources for shared use in processing various combinations of workloads.\r\nWithin an information system's infrastructure, the high-density server system achieves significant cost savings by densely packing components and reducing the number of cable connections, by managing systems jointly, by using virtualization, by reducing power and cooling costs, by simplifying deployment, and by allowing rapid interchange of server equipment.\r\nThanks to its design features and the technologies applied, the high-density server system can serve as a subsystem of corporate data centers or act as the computing center for a small company's information system.","materialsDescription":" <span style=\"font-weight: bold;\">The High-Density Server System Structure</span>\r\nThe high-density server system includes:\r\n<ul><li>server equipment;</li><li>interconnect modules;</li><li>software;</li><li>the high-density server system management subsystem.</li></ul>\r\nStructurally, the high-density server system is designed to house servers of a special form factor, called "blades". At the level of system and application software, a "blade" does not differ from a typical server installed in a standard mounting rack.\r\nThe high-density server system includes a universal chassis with redundant I/O, power, cooling and control subsystems, as well as blade servers and matching storage units. 
The use of the high-density server system means the provision of a functional management subsystem and services for installation, launch and maintenance.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Density_Optimized_Server.png"},{"id":35,"title":"Server","alias":"server","description":"In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.\r\nClient–server systems are today most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.\r\nStrictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. 
In addition to server, the words serve and service (as noun and as verb) are frequently used, though servicer and servant are not. The word service (noun) may refer to either the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests".\r\nThe server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.\r\nWhile request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; this initial registration may be done by request–response. 
Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.","materialsDescription":" <span style=\"font-weight: bold;\">What is a server?</span>\r\nA server is a software or hardware device that accepts and responds to requests made over a network. The device that makes the request, and receives a response from the server, is called a client. On the Internet, the term "server" commonly refers to the computer system which receives a request for a web document and sends the requested information to the client.\r\n<span style=\"font-weight: bold;\">What are they used for?</span>\r\nServers are used to manage network resources. For example, a user may set up a server to control access to a network, send/receive an e-mail, manage print jobs, or host a website. They are also proficient at performing intense calculations. Some servers are committed to a specific task, often referred to as dedicated. However, many servers today are shared servers which can take on the responsibility of e-mail, DNS, FTP, and even multiple websites in the case of a web server.\r\n<span style=\"font-weight: bold;\">Why are servers always on?</span>\r\nBecause they are commonly used to deliver services that are constantly required, most servers are never turned off. Consequently, when servers fail, they can cause the network users and company many problems. 
To alleviate these issues, servers are commonly set up to be fault-tolerant.\r\n<span style=\"font-weight: bold;\">What are the examples of servers?</span>\r\nThe following list contains links to various server types:\r\n<ul><li>Application server;</li><li>Blade server;</li><li>Cloud server;</li><li>Database server;</li><li>Dedicated server;</li><li>Domain name service;</li><li>File server;</li><li>Mail server;</li><li>Print server;</li><li>Proxy server;</li><li>Standalone server;</li><li>Web server.</li></ul>\r\n<span style=\"font-weight: bold;\">How do other computers connect to a server?</span>\r\nWith a local network, the server connects to a router or switch that all other computers on the network use. Once connected to the network, other computers can access that server and its features. For example, with a web server, a user could connect to the server to view a website, search, and communicate with other users on the network.\r\nAn Internet server works the same way as a local network server, but on a much larger scale. The server is assigned an IP address by InterNIC, or by a web host.\r\nUsually, users connect to a server using its domain name, which is registered with a domain name registrar. When users connect to the domain name (such as "computerhope.com"), the name is automatically translated to the server's IP address by a DNS resolver.\r\nThe domain name makes it easier for users to connect to the server because the name is easier to remember than an IP address. Also, domain names enable the server operator to change the IP address of the server without disrupting the way that users access the server. The domain name can always remain the same, even if the IP address changes.\r\n<span style=\"font-weight: bold;\">Where are servers stored?</span>\r\nIn a business or corporate environment, a server and other network equipment are often stored in a closet or glasshouse. 
These areas help isolate sensitive computers and equipment from people who should not have access to them.\r\nServers that are remote or not hosted on-site are located in a data center. With these types of servers, the hardware is managed by another company and configured remotely by you or your company.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Server.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":5013,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/dell_emc_product.jpg","logo":true,"scheme":false,"title":"Dell EMC SourceOne","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":59,"alias":"dell-emc-sourceone","companyTitle":"Dell EMC","companyTypes":["vendor"],"companyId":955,"companyAlias":"dell-emc","description":"Dell EMC SourceOne data archiving enables organizations to efficiently capture, index, store, manage, retrieve and dispose both structured and unstructured data to meet enterprise needs. SourceOne provides seamless access to archive content from email, file, and Microsoft SharePoint. 
It ultimately helps companies reduce IT, operational, and labor costs, as well as meet both corporate management and regulatory requirements needs.\r\nSourceOne provides tools to accelerate search of unstructured content, increasing accuracy of discovery against deduplicated, centralized archives.<br />\r\n<span style=\"font-weight: bold; \">Key offerings:</span>\r\n<ul><li>Dell EMC SourceOne Email Management for Microsoft Exchange</li></ul>\r\n<ul><li>Dell EMC SourceOne Email Management for IBM Lotus Notes Domino</li></ul>\r\n<ul><li>Dell EMC SourceOne for File Systems</li></ul>\r\n<ul><li>Dell EMC SourceOne for Microsoft SharePoint</li></ul>\r\n<span style=\"font-weight: bold;\">Additional offerings:</span><br />\r\n\r\n<ul><li>Dell EMC SourceOne Discovery Manager</li></ul>\r\n<ul><li>Dell EMC SourceOne Email Supervisor</li></ul>\r\n\r\n<span style=\"font-weight: bold;\">BENEFITS</span>\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Reduce Cost</span></span><br />\r\nSourceOne helps to reduce the overall cost of data ownership in two ways. It reduces primary storage cost by archiving aged email and information content to the less costly storage tiers and reduces the costs associated with data and information discovery during legal and eDiscovery processes.<br /><br /><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Take Control of PST Files</span></span><br />\r\nSourceOne provides users unlimited mailboxes and improves server performance by eliminating duplicated, scattered PST files. Properly managing PST files minimizes the chance of critical business data loss and organizational compliance failures.<br /><br /><span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Reduce Legal and Compliance Risks</span></span><br />\r\nWhen it comes to litigation readiness, speed and efficiency matter. 
SourceOne allows users to properly index and search relevant business data with ElasticSearch technology and web-version Discovery Manager for quick litigation and compliance readiness.","shortDescription":"Dell EMC SOURCEONE: Email and Information Archiving for Storage and Discovery","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":15,"sellingCount":11,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Dell EMC SourceOne","keywords":"","description":"Dell EMC SourceOne data archiving enables organizations to efficiently capture, index, store, manage, retrieve and dispose both structured and unstructured data to meet enterprise needs. SourceOne provides seamless access to archive content from email, file, a","og:title":"Dell EMC SourceOne","og:description":"Dell EMC SourceOne data archiving enables organizations to efficiently capture, index, store, manage, retrieve and dispose both structured and unstructured data to meet enterprise needs. SourceOne provides seamless access to archive content from email, file, a","og:image":"https://old.roi4cio.com/fileadmin/user_upload/dell_emc_product.jpg"},"eventUrl":"","translationId":5014,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":307,"title":"Archiving Software","alias":"archiving-software","description":" Enterprise <span style=\"font-weight: bold;\">archiving software </span>is designed to assist in storing a company’s structured and unstructured data. By incorporating unstructured data (e.g., email messages and media files), enterprise information archiving software provides more complete archives of business data across the board. Data can be stored on premise with local data servers or on cloud servers, or using a hybrid of the two. These solutions are used throughout a business by any employee, since all teams should be archiving their data for, at minimum, auditing purposes. 
Data archiving software is typically implemented and maintained by a company’s data team, and it can be used by companies of any size.\r\nWhile similar to a backup software solution, an archiving solution handles the original data as opposed to a copy of that data. To qualify for the data archiving solutions category, a product must: \r\n<ul><li>Store both structured and unstructured data</li><li>Provide data management options for archived data</li><li>Protect access to archived data</li></ul>","materialsDescription":"<h1 class=\"align-center\"> What is Archiving Software?</h1>\r\nArchiving Software supports enterprises in retaining and rapidly retrieving structured and unstructured data over time while complying with security standards and the like. File archiving may include images, messages (e.g. IMs, social media posts, etc.), emails, and content from web pages and social sites. Compliant data retention may require retaining data in its native form and context so that it can be understood.\r\nAlso called Enterprise Information Archiving (EIA), archiving software is designed to meet discovery requirements. That means that the archive must be searchable so that all stored data can be retrieved with context intact.\r\nArchiving software is most commonly a requirement for banking institutions and governments. More stringent privacy laws mean that EIA has become a concern for private corporations as well. Archiving software will contain features overlapping Enterprise Search, Data Governance and eDiscovery, and some features in common with ECM.\r\n<h1 class=\"align-center\">What’s the Difference: Backup vs Archive</h1>\r\nBackups and archives serve different functions, yet it’s common to hear the terms used interchangeably in cloud storage. \r\nA <span style=\"font-weight: bold;\">backup </span>is a copy of your data that is made to protect against loss of that data. 
Typically, backups are made on a regular basis according to a time schedule or when the original data changes. The original data is not deleted, but older backups are often deleted in favor of newer backups.<br /><span style=\"font-weight: bold;\">The goal of a backup</span> is to make a copy of anything in current use that can’t afford to be lost. A backup of a desktop or mobile device might include just the user data so that a previous version of a file can be recovered if necessary.\r\nOn these types of devices an assumption is often made that the OS and applications can easily be restored from original sources if necessary (and/or that restoring an OS to a new device could lead to significant corruption issues). In a virtual server environment, a backup could include the entire virtual machine.\r\nAn <span style=\"font-weight: bold;\">archive </span>is a copy of data made for long-term storage and reference. The original data may or may not be deleted from the source system after the archive copy is made and stored, though it is common for the archive to be the only copy of the data. \r\nIn contrast to a backup whose purpose is to be able to return a computer or file system to a state it existed in previously, <span style=\"font-weight: bold;\">data archiving can have multiple purposes</span>. An archiving system can provide an individual or organization with a permanent record of important papers, legal documents, correspondence, and other matters.\r\nOften, an archive program is used to meet information retention requirements for corporations and businesses. If a dispute or inquiry arises about a business practice, contract, financial transaction, or employee, the records pertaining to that subject can be obtained from the archive.\r\nAn archive is frequently used to ease the burden on faster and more frequently accessed data storage systems. 
Older data that is unlikely to be needed often is put on systems that don’t need to have the speed and accessibility of systems that contain data still in use. Archival storage systems are usually less expensive, as well, so a strong motivation is to save money on data storage.\r\nArchives are often created based on the age of the data or whether the project the data belongs to is still active. Data archiving solutions might send data to an archive if it hasn’t been accessed in a specified amount of time, when it has reached a certain age, if a person is no longer with the organization, or the files have been marked for storage because the project has been completed or closed.<br /><br /><br />","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Archiving_Software.png"},{"id":301,"title":"Storage Software","alias":"storage-software","description":"Sooner or later, your small business will need more space for data storage. Information in the form of e-mails, documents, presentations, databases, graphics, audio files and spreadsheets is the lifeblood of most companies, and the applications that run and protect your business require a lot of disk space. 
In addition, a number of trends are fueling our growing hunger for storage:\r\n<ul><li>Recent government regulations, such as Sarbanes-Oxley, require businesses to maintain and back up a variety of data they might have otherwise deleted.</li><li>For legal reasons, many small businesses are now archiving e-mail messages dating back five or more years.</li><li>The pervasiveness of viruses and spyware requires ever-more vigilant backups--which requires ever-more storage capacity.</li><li>Each new version of a software application or operating system demands more hard-drive real estate than its predecessor.</li><li>The growing need to store large media files, such as video, and make them available to users on a network is generating demand for more sophisticated storage solutions.</li></ul>\r\nStoring information and managing its storage is critical to a company's behind-the-scenes success. Fortunately, there are many storage software solutions available.\r\n<span style=\"font-weight: bold; \">Online storage or Cloud Storage. </span>Cloud storage is storage space in a commercial data center, accessible from any computer with Internet access. Data storage programs are usually provided by a service provider. A limited amount of storage space may be provided free, with more space available for a subscription fee. Examples of service providers are Amazon S3, Google Drive, Sky Drive, etc. \r\nBy backing up your most important files to a secure, remote server, you're protecting the data stored at your place of business. With cloud storage management software you can easily share large files with clients, partners and others by providing them with password-protected access to your online storage service, thereby eliminating the need to e-mail those large files. \r\n<span style=\"font-weight: bold; \">Network-attached storage software.</span> Network-attached storage (NAS) provides fast, simple, reliable access to data in an IP networking environment. 
These storage software solutions can also offload file serving from other servers on your network, thereby increasing performance. A network storage software system allows you to consolidate storage, thereby increasing efficiency and reducing costs; simplify storage administration and data backup and recovery; and allow for easy scaling to meet growing storage requirements.\r\n<span style=\"font-weight: bold; \">Storage virtualization software.</span> The management of storage and data is becoming difficult and time-consuming. Storage management tools help address this problem by making backup, archiving and recovery tasks easier and less time-consuming. Storage virtualization aggregates the functions and hides the actual complexity of the storage area network (SAN).\r\nStorage virtualization can be applied to any level of a SAN. The virtualization techniques can also be applied to different storage functions such as physical storage, RAID groups, logical unit numbers (LUNs), LUN subdivisions, storage zones and logical volumes, etc. 
","materialsDescription":"<h1 class=\"align-center\"> Things You Need to Know About Data Storage Management</h1>\r\n<span style=\"font-weight: bold; \">Know your data.</span> When formulating your data storage management policy, ask the following questions:\r\n<ul><li>How soon do I need the data back if lost?</li><li>How fast do I need to access the data?</li><li>How long do I need to retain data?</li><li>How secure does it need to be?</li><li>What regulatory requirements need to be adhered to?</li></ul>\r\n<span style=\"font-weight: bold; \">Don't neglect unstructured data.</span> Think about how you might want to combine multi-structured data from your transactional systems with semi-structured or unstructured data from your email servers, network file systems, etc.\r\n<span style=\"font-weight: bold; \">Establish a data retention policy.</span> Setting the right data retention policies is a necessity for both internal data governance and legal compliance.\r\n<span style=\"font-weight: bold; \">Look for a solution that fits your data, not the other way around.</span> The storage and backup solutions you choose should be optimized for mobile and virtual platforms, in addition to desktops and laptops -- and provide a consistent experience across any platform, including mobile editing capabilities and intuitive experience across mobile devices, virtual desktops or desktops.\r\n<span style=\"font-weight: bold; \">Make sure your data is secure.</span> When managing data within any IT environment, storage security has to be the first priority. 
The data also needs to be encrypted so it cannot be read or used by unscrupulous third parties if it ever falls out of possession or is hacked (which does happen).\r\n<h1 class=\"align-center\">What is Self-Storage Software?</h1>\r\nA typical self-storage management software system provides the ability to manage storage units and their state (available, rented, reserved or disabled), as well as customers with their balances and reporting. Self-storage management software can also have additional features such as point of sale, customer notes, digital signature, insurance, payment processing, accounting, etc.\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Storage_Software__1_.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4760,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/cisco_logo.png","logo":true,"scheme":false,"title":"Cisco Connected Mobile Experiences (CMX)","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":125,"alias":"cisco-connected-mobile-experiences-cmx","companyTitle":"Cisco","companyTypes":["supplier","vendor"],"companyId":170,"companyAlias":"cisco","description":"Cisco Connected Mobile Experiences turns the industry-leading wireless infrastructure into an intelligent platform that not only provides a reliable connection, but also provides analytic customer information that you can use to grow your business. As the undisputed leader in the Wi-Fi market with over seven years of experience in Wi-Fi location, Cisco is a trusted partner.\r\nCisco Connected Mobile Experiences (CMX) uses a high-density wireless network with the Cisco® Mobility Services Engine, which enables organizations to collect aggregated location data for Wi-Fi users. 
CMX Analytics is a data visualization module that helps organizations use the network as a source of data for business analysis and highlight behavioral patterns and trends, which, in turn, can help businesses make informed decisions about how to improve customer service and service quality.<br /><span style=\"font-weight: bold;\"><br />Benefits</span>\r\nWith the CMX solution, you can:\r\n<ul><li>Analyze business performance and optimize marketing activities through quantitative analysis of activity at your facility, for example, measuring the foot traffic of a particular store</li></ul>\r\n<ul><li>Increase profitability per square meter by optimizing the layout using detailed outlet foot traffic, visitor-to-customer conversion rates and other information, down to specific zones, and by quantifying the impact of changes</li></ul>\r\n<ul><li>Increase customer satisfaction by ensuring that there are enough staff during peak periods</li></ul>\r\n<ul><li>Increase profitability using location data for optimal mobile marketing campaigns.</li></ul>","shortDescription":"Thanks to Cisco CMX solutions, Wi-Fi can turn from a familiar means of network access into a powerful tool for analytics, customer engagement and additional profit.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":9,"sellingCount":9,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Cisco Connected Mobile Experiences (CMX)","keywords":"","description":"Cisco Connected Mobile Experiences turns the industry-leading wireless infrastructure into an intelligent platform that not only provides a reliable connection, but also provides analytic customer information that you can use to grow your business. 
As the undi","og:title":"Cisco Connected Mobile Experiences (CMX)","og:description":"Cisco Connected Mobile Experiences turns the industry-leading wireless infrastructure into an intelligent platform that not only provides a reliable connection, but also provides analytic customer information that you can use to grow your business. As the undi","og:image":"https://old.roi4cio.com/fileadmin/user_upload/cisco_logo.png"},"eventUrl":"","translationId":4761,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":34,"title":"ITSM - IT Service Management","alias":"itsm-it-service-management","description":"<span style=\"font-weight: bold; \">IT service management (ITSM)</span> is the process of designing, delivering, managing, and improving the IT services an organization provides to its end users. ITSM is focused on aligning IT processes and services with business objectives to help an organization grow.\r\nITSM positions IT services as the key means of delivering and obtaining value, where an internal or external IT service provider works with business customers, at the same time taking responsibility for the associated costs and risks. ITSM works across the whole lifecycle of a service, from the original strategy, through design, transition and into live operation.\r\nTo ensure sustainable quality of IT services, ITSM establishes a set of practices, or processes, constituting a service management system. There are industrial, national and international standards for IT service management solutions, setting up requirements and good practices for the management system. \r\nITSM system is based on a set of principles, such as focusing on value and continual improvement. It is not just a set of processes – it is a cultural mindset to ensure that the desired outcome for the business is achieved. 
\r\n<span style=\"font-weight: bold; \">ITIL (IT Infrastructure Library)</span> is a framework of best practices and recommendations for managing an organization's IT operations and services. IT service management processes, when built based on the ITIL framework, pave the way for better IT service operations management and improved business. To summarize, ITIL is a set of guidelines for effective IT service management best practices. ITIL has evolved beyond the delivery of services to providing end-to-end value delivery. The focus is now on the co-creation of value through service relationships. \r\n<p class=\"align-center\"><span style=\"font-weight: bold; \">ITSM processes typically include five stages, all based on the ITIL framework:</span></p>\r\n<span style=\"font-weight: bold; \">ITSM strategy.</span> This stage forms the foundation or the framework of an organization's ITSM process building. It involves defining the services that the organization will offer, strategically planning processes, and recognizing and developing the required assets to keep processes moving. \r\n<span style=\"font-weight: bold; \">Service design.</span> This stage's main aim is planning and designing the IT services the organization offers to meet business demands. It involves creating and designing new services as well as assessing current services and making relevant improvements.\r\n<span style=\"font-weight: bold; \">Service transition.</span> Once the designs for IT services and their processes have been finalized, it's important to build them and test them out to ensure that processes flow. IT teams need to ensure that the designs don't disrupt services in any way, especially when existing IT service processes are upgraded or redesigned. This calls for change management, evaluation, and risk management. \r\n<span style=\"font-weight: bold; \">Service operation. </span>This phase involves implementing the tried and tested new or modified designs in a live environment. 
While in this stage, the processes have already been tested and the issues fixed, but new processes are bound to have hiccups—especially when customers start using the services. \r\n<span style=\"font-weight: bold;\">Continual service improvement (CSI).</span> Implementing IT processes successfully shouldn't be the final stage in any organization. There's always room for improvement and new development based on issues that pop up, customer needs and demands, and user feedback.\r\n\r\n","materialsDescription":"<h1 class=\"align-center\">Benefits of efficient ITSM processes</h1>\r\nIrrespective of the size of business, every organization is involved in IT service management in some way. ITSM ensures that incidents, service requests, problems, changes, and IT assets—in addition to other aspects of IT services—are managed in a streamlined way.\r\nIT teams in your organization can employ various workflows and best practices in ITSM, as outlined in ITIL. Effective IT service management can have positive effects on an IT organization's overall function.\r\nHere are the 10 key benefits of ITSM:\r\n<ul><li> Lower costs for IT operations</li><li> Higher returns on IT investments</li><li> Minimal service outages</li><li> Ability to establish well-defined, repeatable, and manageable IT processes</li><li> Efficient analysis of IT problems to reduce repeat incidents</li><li> Improved efficiency of IT help desk teams</li><li> Well-defined roles and responsibilities</li><li> Clear expectations on service levels and service availability</li><li> Risk-free implementation of IT changes</li><li> Better transparency into IT processes and services</li></ul>\r\n<h1 class=\"align-center\">How to choose an ITSM tool?</h1>\r\nWith a competent IT service management goal in mind, it's important to invest in a service desk solution that caters to your business needs. It goes without saying, with more than 150 service desk tools to choose from, selecting the right one is easier said than done. 
Here are a few things to keep in mind when choosing an ITSM product:\r\n<span style=\"font-weight: bold; \">Identify key processes and their dependencies. </span>Based on business goals, decide which key ITSM processes need to be implemented and chart out the integrations that need to be established to achieve those goals. \r\n<span style=\"font-weight: bold; \">Consult with ITSM experts.</span> Participate in business expos, webinars, demos, etc., and educate yourself about the various options that are available in the market. Reports from expert analysts such as Gartner and Forrester are particularly useful as they include reviews of almost every solution, ranked based on multiple criteria.\r\n<span style=\"font-weight: bold; \">Choose a deployment option.</span> Every business has a different IT infrastructure model. Selecting an on-premises or software-as-a-service (SaaS) IT service management tool depends on whether your business prefers to host its applications and data on its own servers or use a public or private cloud.\r\n<span style=\"font-weight: bold; \">Plan ahead for the future.</span> Although it's important to consider the "needs" primarily, you shouldn't rule out the secondary or luxury capabilities. If the ITSM tool doesn't have the potential to adapt to your needs as your organization grows, it can hold you back from progressing. Draw a clear picture of where your business is headed and choose an ITSM service that is flexible and technology-driven.\r\n<span style=\"font-weight: bold;\">Don't stop with the capabilities of the ITSM tool.</span> It might be tempting to assess an ITSM tool based on its capabilities and features but it's important to evaluate the vendor of the tool. A good IT support team, and a vendor that is endorsed for their customer-vendor relationship can take your IT services far. 
Check Gartner's magic quadrant and other analyst reports, along with product and support reviews to ensure that the said tool provides good customer support.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_ITSM.png"},{"id":3,"title":"MDM - Mobile Device Management","alias":"mdm-mobile-device-management","description":" <span style=\"font-weight: bold; \">Mobile device management (MDM)</span> is an industry term for the administration of mobile devices, such as smartphones, tablet computers and laptops. Device management system is usually implemented with the use of a third party product that has management features for particular vendors of mobile devices.\r\nMDM is typically a deployment of a combination of on-device applications and configurations, corporate policies and certificates, and backend infrastructure, for the purpose of simplifying and enhancing the IT management of end user devices. In modern corporate IT environments, the sheer number and diversity of managed devices (and user behavior) has motivated device management tools that allow the management of devices and users in a consistent and scalable way. The overall role of MDM is to increase device supportability, security, and corporate functionality while maintaining some user flexibility.\r\nMany organizations administer devices and applications using MDM products/services. Mobile device management software primarily deals with corporate data segregation, securing emails, securing corporate documents on devices, enforcing corporate policies, integrating and managing mobile devices including laptops and handhelds of various categories. MDM implementations may be either on-premises or cloud-based.\r\nMDM functionality can include over-the-air distribution of applications, data and configuration settings for all types of mobile devices, including mobile phones, smartphones, tablet computers, ruggedized mobile computers, mobile printers, mobile POS devices, etc. 
Most recently, laptops and desktops have been added to the list of systems supported as Mobile Device Management becomes more about basic device management and less about the mobile platform itself. \r\nSome of the <span style=\"font-weight: bold; \">core functions</span> of mobile management software include:\r\n<ul><li>Ensuring that diverse user equipment is configured to a consistent standard/supported set of applications, functions, or corporate policies</li><li>Updating equipment, applications, functions, or policies in a scalable manner</li><li>Ensuring that users use applications in a consistent and supportable manner</li><li>Ensuring that equipment performs consistently</li><li>Monitoring and tracking equipment (e.g. location, status, ownership, activity)</li><li>Being able to efficiently diagnose and troubleshoot equipment remotely</li></ul>\r\nDevice management solutions are leveraged for both company-owned and employee-owned (Bring Your Own Device) devices across the enterprise or mobile devices owned by consumers. Consumer demand for BYOD is now requiring a greater effort for MDM and increased security for both the devices and the enterprise they connect to, especially since employers and employees have different expectations concerning the types of restrictions that should be applied to mobile devices.\r\nBy controlling and protecting the data and configuration settings of all mobile devices in a network, enterprise device management software can reduce support costs and business risks. The intent of MDM is to optimize the functionality and security of a mobile communications network while minimizing cost and downtime.\r\nWith mobile devices becoming ubiquitous and applications flooding the market, mobile monitoring is growing in importance. The use of mobile device management continues to grow at a steady pace, and is likely to register a compound annual growth rate (CAGR) of nearly 23% through 2028. 
The US will continue to be the largest market for mobile device management globally. ","materialsDescription":"<h1 class=\"align-center\">How does Mobile Device Management work?</h1>\r\nMobile device management relies on endpoint software called an MDM agent and an MDM server that lives in a data center. IT administrators configure policies through the MDM server's management console, and the server then pushes those policies over the air to the MDM agent on the device. The agent applies the policies to the device by communicating with application programming interfaces (APIs) built directly into the device operating system.\r\nSimilarly, IT administrators can deploy applications to managed devices through the MDM server. Mobile device management emerged in the early 2000s as a way to control and secure the personal digital assistants and smartphones that business workers began to use. The consumer smartphone boom that started with the launch of the Apple iPhone in 2007 led to the bring your own device trend, which fueled further interest in MDM.\r\nModern MDM software supports not only smartphones but also tablets, Windows 10 and macOS computers and even some internet of things devices. The practice of using MDM to control PCs is known as unified endpoint management.\r\n<h1 class=\"align-center\">Key Benefits of Mobile Device Management Software</h1>\r\n<span style=\"font-weight: bold;\">Reduce IT Administration.</span> Instead of manually configuring and testing each new mobile device, mobile device software takes care of the repetitive tasks for you. That gives IT staff more time to work on challenging projects that improve productivity.<span style=\"font-weight: bold;\"></span> \r\n<span style=\"font-weight: bold;\">Improve End-user Productivity. </span>Mobile device management helps end users become more productive because the process of requesting new mobile devices can be cut down from days to hours. 
Once end users have the device in their hands, mobile device management program helps them get set up on their corporate network much faster. That means less time waiting to get access to email, internal websites, and calendars.<span style=\"font-weight: bold;\"></span> \r\n<span style=\"font-weight: bold;\">Reduce IT Risk.</span> Mobile devices, especially if your organization allows “Bring Your Own Device” (BYOD), create increased risk exposures. Typically, IT managers respond to these risks in one of two ways, neither of which help. First, you may say “no” to mobile device requests. That’s a fast way to become unpopular. Second, you may take a manual approach to review and oversee each device.<span style=\"font-weight: bold;\"></span> \r\n<span style=\"font-weight: bold;\">Enable Enterprise Growth. </span>If your enterprise added a thousand employees this quarter through hiring, acquisition, or other changes, could IT handle the challenge? If you’re honest, you can probably imagine going through plenty of struggles and missing SLAs. That kind of disappointment and missed service expectations make end users respect IT less. \r\nBy using enterprise device management thoroughly, you'll enable enterprise growth. You'll have the systems and processes to manage 100 users or 10,000 users. That means IT will be perceived as enabling growth not standing in the way.\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_MDM_Mobile_Device_Management.png"},{"id":71,"title":"CRM - Customer Relationship Management","alias":"crm-customer-relationship-management","description":"<span style=\"font-weight: bold;\">Customer service</span> is the provision of service to customers before, during and after a purchase. The perception of success of such interactions is dependent on employees "who can adjust themselves to the personality of the guest". 
Customer service concerns the priority an organization assigns to customer service relative to components such as product innovation and pricing. In this sense, an organization that values good customer service may spend more money in training employees than the average organization or may proactively interview customers for feedback.\r\n<span style=\"font-weight: bold;\">Customer support</span> is a range of customer services to assist customers in making cost-effective and correct use of a product. It includes assistance in planning, installation, training, troubleshooting, maintenance, upgrading, and disposal of a product. These services may even be performed at the customer's site, where he/she uses the product or service. In this case, it is called "at home customer services" or "at home customer support."\r\nFor technology products such as mobile phones, televisions, computers, software products or other electronic or mechanical goods, this is termed technical support. \r\nCustomer service may be provided by a person (e.g., sales and service representative), or by automated means, such as kiosks, Internet sites, and apps.\r\n<span style=\"font-weight: bold;\">CRM </span>(Customer Relationship Management) is an approach to manage a company's interaction with current and potential customers. It uses data analysis about customers' history with a company to improve business relationships with customers, specifically focusing on customer retention and ultimately driving sales growth.\r\nOne important aspect of the CRM approach is that CRM systems compile data from a range of different communication channels, including a company's website, telephone, email, live chat, marketing materials and more recently, social media. 
Through the CRM approach and the systems used to facilitate it, businesses learn more about their target audiences and how best to cater to their needs.\r\nCRM helps users focus on their organization’s relationships with individual people, including customers, service users, colleagues, or suppliers.\r\nWhen people talk about a customer relationship management system, they might mean any of three things: \r\n<ul><li><span style=\"font-weight: bold;\">CRM as Technology</span>: This is a technology product, often in the cloud, that teams use to record, report and analyse interactions between the company and users. This is also called a CRM system or solution.</li><li><span style=\"font-weight: bold;\">CRM as a Strategy</span>: This is a business’ philosophy about how relationships with customers and potential customers should be managed. </li><li><span style=\"font-weight: bold;\">CRM as a Process</span>: Think of this as a system a business adopts to nurture and manage those relationships.</li></ul>\r\n<br /><br /><br />","materialsDescription":"<h1 class=\"align-center\"><span style=\"font-weight: normal;\">Why is CRM important?</span></h1>\r\nA CRM system enables a business to deepen its relationships with customers, service users, colleagues, partners and suppliers.\r\nForging good relationships and keeping track of prospects and customers is crucial for customer acquisition and retention, which is at the heart of a CRM’s function. You can see everything in one place — a simple, customizable dashboard that can tell you a customer’s history with you, the status of their orders, any outstanding customer service issues, and more.\r\nGartner predicts that by 2021, CRM technology will be the single largest revenue area of spending in enterprise software. If your business is going to last, you know that you need a strategy for the future. 
For forward-thinking businesses, CRM is the framework for that strategy.\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the benefits of CRM?</span></h1>\r\nBy collecting and organising data about customer interactions, making it accessible and actionable for all, and facilitating analysis of that data, CRM offers many benefits and advantages.<br />The benefits and advantages of CRM include:\r\n<ul><li>Enhanced contact management</li><li>Cross-team collaboration</li><li>Heightened productivity</li><li>Empowered sales management</li><li>Accurate sales forecasting</li><li>Reliable reporting</li><li>Improved sales metrics</li><li>Increased customer satisfaction and retention</li><li>Boosted marketing ROI</li><li>Enriched products and services</li></ul>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the key features of the most popular CRM software programs?</span></h1>\r\nWhile many CRM solutions differ in their specific value propositions — depending on your business size, priority function, or industry type — they usually share some core features. These, in fact, are the foundation of any top CRM software, without which you might end up using an inferior app or an overrated address book. So, let’s discuss the key features you need to look for when figuring out the best CRM software for your business.\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Contact management</span>. The best CRM solutions aren’t just an address book that only organizes contact details. They manage customer data in a centralized place and give you a 360-degree view of your customers. You should be able to organize customers’ personal information, demographics, interactions, and transactions in ways that are meaningful to your goals or processes. Moreover, a good contact management feature lets you personalize your outreach campaign. 
By collecting personal, social, and purchase data, it will help you segment target audience groups in different ways.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Reporting and dashboards</span>. These features of customer relationship management allow you to use analytics to interpret customer data. Reporting is very useful if you want to consolidate disparate data and churn out insights in different visualizations. This lets you make better decisions or proactively deal with market trends and customer behavioral patterns. The more visual widgets a CRM solution has, the better you can present reports. Furthermore, the best customer relationship management software generates real-time data, making reporting more accurate and timely. Reporting also keeps tabs on sales opportunities like upselling, reselling, and cross-selling, especially when integrated with e-commerce platforms.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Lead management</span>. These features let you manage leads all the way to the win-loss stage. They pave a clear path to conversion, so you can quickly assess how the business is performing. One of the three main legs that comprise the best client relationship management software (the other two being contact management and reporting), lead management frees the sales team from follow-ups, tracking, and repetitive tasks.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Deals and tasks</span>. Deals and tasks are closely associated with leads. Deals are leads at the negotiation stage, so it’s critical to keep a close eye on their associated tasks for a higher chance of conversion.<br />CRM software tools should also let you track both deals and tasks in their respective windows or across the sales stages. Whether you’re viewing a contact or analyzing the sales pipeline, you should be able to immediately check the deal’s tasks and details. 
Deals and tasks should also have user permissions to prevent leaks of sensitive data. Similarly, alerts are critical to tasks so that deadlines are met. Notifications are usually sent via email or prominently displayed on the user’s dashboard.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Campaign management</span>. Solid CRM software will integrate this feature to enable marketing processes from outreach concept to A/B testing to deployment and post-campaign analysis. This will allow you to sort campaigns to target segments in your contacts and define deployment strategies. You will also be able to define metrics for various channels, then plow back the insights generated by post-campaign analytics into planning more campaigns.<br />Recurring outreach efforts can also be automated. For instance, you can set it to instantly send appropriate content to contacts based on their interests or send tiered autoresponders based on campaign feedback.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Email management</span>. By integrating with popular email clients like Gmail and Outlook, CRM solutions can capture email messages and sort important details that can be saved in contacts or synced with leads. They can also track activities like opened emails, forwarded emails, clicked links, and downloaded files. Emails can also be qualified for prospecting.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Social media management. </span>Popular CRM systems feature integrated social media management, where you can view different social media pages from the CRM’s interface. This is a convenient way to post, reply, and manage all your pages. Likewise, this feature gives you a better perspective on how customers are interacting with your brand. A glance at their likes and dislikes, interests, shares, and public conversations helps you assess customer biases and preferences. 
Customers are also increasingly using social media to contact companies; hence, a good CRM should alert you to brand mentions.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Mobile access</span>. With more users accessing apps via mobile devices, many vendors have been prioritizing mobile-first platforms. An Emergence Capital Partners study found over 300 mobile-first apps so far, and CRM is definitely one of their targets. Many CRM solutions have both Android and iOS apps. Mobile access is valuable in two ways: accessing data and inputting data while on location. Field sales reps with the latest sales information on hand may be able to interest prospects better. Conversely, sales reps can quickly update deals across the pipeline as soon as they come out of a client meeting.</li></ul>\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/CRM_-_Customer_Relationship_Management.png"},{"id":69,"title":"Business Analytics","alias":"business-analytics","description":"Business Analytics is “the study of data through statistical and operations analysis, the formation of predictive models, application of optimization techniques, and the communication of these results to customers, business partners, and college executives.” Business Analytics requires quantitative methods and evidence-based data for business modeling and decision making; as such, Business Analytics requires the use of Big Data.\r\nSAS describes Big Data as “a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis.” What’s important to keep in mind about Big Data is that the amount of data is not as important to an organization as the analytics that accompany it. When companies analyze Big Data, they are using Business Analytics to get the insights required for making better business decisions and strategic moves.\r\nCompanies use Business Analytics (BA) to make data-driven decisions. 
The insight gained by BA enables these companies to automate and optimize their business processes. In fact, data-driven companies that utilize Business Analytics achieve a competitive advantage because they are able to use the insights to:\r\n<ul><li>Conduct data mining (explore data to find new patterns and relationships)</li><li>Complete statistical analysis and quantitative analysis to explain why certain results occur</li><li>Test previous decisions using A/B testing and multivariate testing</li><li>Make use of predictive modeling and predictive analytics to forecast future results</li></ul>\r\nBusiness Analytics also provides support for companies in the process of making proactive tactical decisions, and BA makes it possible for those companies to automate decision making in order to support real-time responses.","materialsDescription":"<span style=\"font-weight: bold; \">What does Business Analytics (BA) mean?</span>\r\nBusiness analytics (BA) refers to all the methods and techniques that are used by an organization to measure performance. Business analytics are made up of statistical methods that can be applied to a specific project, process or product. Business analytics can also be used to evaluate an entire company. Business analytics are performed in order to identify weaknesses in existing processes and highlight meaningful data that will help an organization prepare for future growth and challenges.\r\nThe need for good business analytics has spurred the creation of business analytics software and enterprise platforms that mine an organization’s data in order to automate some of these measures and pick out meaningful insights.\r\nAlthough the term has become a bit of a buzzword, business analytics are a vital part of any business. Business analytics make up a large portion of decision support systems, continuous improvement programs and many of the other techniques used to keep a business competitive. 
Consequently, accurate business analytics like efficiency measures and capacity utilization rates are the first step to properly implementing these techniques.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Business_Analytics.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4764,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/cisco_logo.png","logo":true,"scheme":false,"title":"Cisco Transport Manager (CTM)","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":125,"alias":"cisco-transport-manager-ctm","companyTitle":"Cisco","companyTypes":["supplier","vendor"],"companyId":170,"companyAlias":"cisco","description":"Cisco Transport Manager is an intelligent, multitechnology, carrier-class element management system (EMS) for optical networks designed following the TMF MTNM principles. Cisco Transport Manager simplifies provisioning and network management and reduces overall costs by providing operators with:<br />● Single system to manage optical networks: Increases productivity by simplifying complex provisioning tasks of optical network elements<br />● Single repository for network information: Supports configuration, fault, performance, and security management to capture network information such as resources, alarms, and performance data<br />● Integration with operations support system (OSS): Foundation for northbound EMS-to-network management system (NMS) interfaces, with gateway options for CORBA, compliant with TMF 814 standard, Simple Network Management Protocol (SNMP), and direct SQL database access<br />Features and Benefits<br />Cisco Transport Manager increases user productivity through a powerful GUI-based management system that simplifies complex provisioning tasks. 
The Cisco Transport Manager northbound interfaces accelerate integration into the operations support system’s customer environment. <br /><br />","shortDescription":"Enhance Network Security and Service Continuity with Cisco Transport Manager (CTM)","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":7,"sellingCount":7,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Cisco Transport Manager (CTM)","keywords":"","description":"Cisco Transport Manager is an intelligent, multitechnology, carrier-class element management system (EMS) for optical networks designed following the TMF MTNM principles. Cisco Transport Manager simplifies provisioning and network management and reduces overal","og:title":"Cisco Transport Manager (CTM)","og:description":"Cisco Transport Manager is an intelligent, multitechnology, carrier-class element management system (EMS) for optical networks designed following the TMF MTNM principles. Cisco Transport Manager simplifies provisioning and network management and reduces overal","og:image":"https://old.roi4cio.com/fileadmin/user_upload/cisco_logo.png"},"eventUrl":"","translationId":4765,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":57,"title":"Engineering Applications","alias":"engineering-applications","description":"Specific segmentations of <span style=\"font-weight: bold;\">Engineering Applications</span> include software packages, such as 2D CAD, 3D CAD, engineering analysis, project software and services, collaborative engineering software, and asset information management. These tools are used not only for asset creation but also to manage data and information throughout the lifecycle of physical assets in both infrastructure and industry. 
Engineering applications provide as-built information to owners for operations and maintenance requirements, as well as a document for any modifications to the facility.\r\n<span style=\"font-weight: bold; \">Computer-aided design (CAD)</span> is the use of computers (or workstations) to aid in the creation, modification, analysis, or optimization of a design. CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and create a database for manufacturing. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations. \r\nIts use in designing electronic systems is known as electronic design automation (EDA). Application of CAD in mechanical engineering is known as mechanical design automation (MDA) or computer-aided drafting (CAD), which includes the process of creating a technical drawing with the use of computer software.\r\nCAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or may also produce raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.\r\nCAD is an important industrial art extensively used in many engineering applications, including the automotive, shipbuilding, and aerospace industries, industrial and architectural design, electrical engineering applications, prosthetics, environmental engineering applications, and many more. 
\r\nEngineering apps and software are: 2D layout and CAD software, 3D design and visualization systems, Pre-engineering and FEED applications, Engineering information management systems, Asset lifecycle information management systems, Asset performance management systems, P&ID and piping layout design, 3D laser scanning and point cloud modeling, 3D augmented reality simulation systems, 3D virtual reality simulation based on other technologies (photometry, etc.), 3D virtual simulation for operator training, Electrical Engineering applications and HVAC design, Engineering analysis tools, Civil engineering design packages, Fabrication and construction management systems, Software implementation services, Software maintenance & support services, Software as a service including deployment (Cloud, subscription, etc.), Collaborative software for engineering workflows, Associated databases and interfaces.","materialsDescription":"<h1 class=\"align-center\">2D and 3D CAD software</h1>\r\n<p class=\"align-left\">General-purpose CAD software includes a wide range of 2D and 3D software. Before delving into the more specific types of CAD software, it’s important to understand the difference between 2D and 3D CAD and the various industries that leverage them.</p>\r\n<p class=\"align-left\">2D CAD software offers a platform to design in two dimensions. Since 2D CAD does not allow for the creation of perspectives or scale, it is often used for drawing, sketching and drafting conceptual designs. 2D CAD is often used for floor plan development, building permit drawing and building inspection planning. Since it is mainly used as a tool for conceptual design, it is also a great starting point for most 3D designs. This gives users a basic overview of dimension and scale before they move on to 3D design. 
2D CAD typically runs at a significantly lower price since it does not provide the same scale of tools and breadth of features.</p>\r\n<p class=\"align-left\">3D CAD provides a platform for designing 3D objects. The main feature of this type of CAD software is 3D solid modeling. This lets designers create objects with length, width and height, allowing more accurate scaling and visualization. With this feature, users can push and pull surfaces and manipulate designs to adjust measurements. Once the 3D design is to your liking, you can transfer it to 3D rendering software and place the designs in fully realized 3D landscapes.</p>\r\n<h1 class=\"align-center\">BIM software</h1>\r\n<p class=\"align-left\">One of the more specific types of 3D CAD software is building information modeling software, also known as BIM software. BIM software is intended to aid in the design and construction of buildings specifically. BIM software provides users with the ability to break down building parts and see how they fit into a single finalized structure. Users can isolate walls, columns, windows, doors, etc., and alter the design. Engineers, architects, and manufacturers are just some of the professionals who use BIM software on a regular basis.</p>\r\n<h1 class=\"align-center\">Civil engineering design software</h1>\r\n<p class=\"align-left\">Civil engineering design software allows users to design 3D models of municipal buildings and structures. This includes tools for railway modeling, highway design and city infrastructure planning. Similar to BIM, civil engineering design software helps in every stage of the design process by breaking it down into drafting, designing and visualizing the final product. Civil engineering design software also helps designers estimate building costs. 
Civil engineering design software is perfect for engineers working in public and civil departments, including transportation, structural and geotechnical engineering.</p>\r\n<h1 class=\"align-center\">3D printing software</h1>\r\n<p class=\"align-left\">3D printing software facilitates the printing of real-life 3D objects. When users design an object, it can be translated into 3D printing software. The software then relays instructions on how to print that design to an actual 3D printer. The 3D printing software can send instructions to print just certain parts of an object, or the entire object. Some CAD software doubles as 3D printing software so you can seamlessly produce actual 3D objects all from one platform. 3D printing software can be used by manufacturers and architects to build machine or building parts. This greatly reduces production costs, as manufacturers no longer need offsite locations for manufacturing. It also gives companies a rapid test drive to see how a product would look if it were mass produced.</p>","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Engineering_Applications.png"},{"id":71,"title":"CRM - Customer Relationship Management","alias":"crm-customer-relationship-management","description":"<span style=\"font-weight: bold;\">Customer service</span> is the provision of service to customers before, during and after a purchase. The perception of success of such interactions is dependent on employees &quot;who can adjust themselves to the personality of the guest&quot;. Customer service concerns the priority an organization assigns to customer service relative to components such as product innovation and pricing. 
In this sense, an organization that values good customer service may spend more money on training employees than the average organization or may proactively interview customers for feedback.\r\n<span style=\"font-weight: bold;\">Customer support</span> is a range of customer services to assist customers in making cost-effective and correct use of a product. It includes assistance in planning, installation, training, troubleshooting, maintenance, upgrading, and disposal of a product. These services may even be provided at the customer's side, where he/she uses the product or service. In this case they are called &quot;at home customer services&quot; or &quot;at home customer support.&quot;\r\nFor technology products such as mobile phones, televisions, computers, software, or other electronic or mechanical goods, this is termed technical support. \r\nCustomer service may be provided by a person (e.g., a sales and service representative) or by automated means, such as kiosks, Internet sites, and apps.\r\n<span style=\"font-weight: bold;\">CRM </span>(Customer Relationship Management) is an approach to managing a company's interaction with current and potential customers. It uses data analysis about customers' history with a company to improve business relationships with customers, specifically focusing on customer retention and ultimately driving sales growth.\r\nOne important aspect of the CRM approach is CRM systems that compile data from a range of different communication channels, including a company's website, telephone, email, live chat, marketing materials and, more recently, social media. 
Through the CRM approach and the systems used to facilitate it, businesses learn more about their target audiences and how best to cater to their needs.\r\nCRM helps users focus on their organization’s relationships with individual people, including customers, service users, colleagues, or suppliers.\r\nWhen people talk about a customer relationship management system, they might mean any of three things: \r\n<ul><li><span style=\"font-weight: bold;\">CRM as Technology</span>: This is a technology product, often in the cloud, that teams use to record, report and analyse interactions between the company and users. This is also called a CRM system or solution.</li><li><span style=\"font-weight: bold;\">CRM as a Strategy</span>: This is a business’ philosophy about how relationships with customers and potential customers should be managed. </li><li><span style=\"font-weight: bold;\">CRM as a Process</span>: Think of this as a system a business adopts to nurture and manage those relationships.</li></ul>\r\n<br /><br /><br />","materialsDescription":"<h1 class=\"align-center\"><span style=\"font-weight: normal;\">Why is CRM important?</span></h1>\r\nA CRM system enables a business to deepen its relationships with customers, service users, colleagues, partners and suppliers.\r\nForging good relationships and keeping track of prospects and customers is crucial for customer acquisition and retention, which is at the heart of a CRM’s function. You can see everything in one place — a simple, customizable dashboard that can tell you a customer’s history with you, the status of their orders, any outstanding customer service issues, and more.\r\nGartner predicts that by 2021, CRM technology will be the single largest revenue area of spending in enterprise software. If your business is going to last, you know that you need a strategy for the future. 
For forward-thinking businesses, CRM is the framework for that strategy.\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the benefits of CRM?</span></h1>\r\nBy collecting and organising data about customer interactions, making it accessible and actionable for all, and facilitating analysis of that data, CRM offers many benefits and advantages.<br />The benefits and advantages of CRM include:\r\n<ul><li>Enhanced contact management</li><li>Cross-team collaboration</li><li>Heightened productivity</li><li>Empowered sales management</li><li>Accurate sales forecasting</li><li>Reliable reporting</li><li>Improved sales metrics</li><li>Increased customer satisfaction and retention</li><li>Boosted marketing ROI</li><li>Enriched products and services</li></ul>\r\n<h1 class=\"align-center\"><span style=\"font-weight: normal;\">What are the key features of the most popular CRM software programs?</span></h1>\r\nWhile many CRM solutions differ in their specific value propositions — depending on your business size, priority function, or industry type — they usually share some core features. These, in fact, are the foundation of any top CRM software, without which you might end up using an inferior app or an overrated address book. So, let’s discuss the key features you need to look for when figuring out the best CRM software for your business.\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Contact management</span>. The best CRM solutions aren’t just an address book that only organizes contact details. They manage customer data in a centralized place and give you a 360-degree view of your customers. You should be able to organize customers’ personal information, demographics, interactions, and transactions in ways that are meaningful to your goals or processes. Moreover, a good contact management feature lets you personalize your outreach campaign. 
By collecting personal, social, and purchase data, it will help you segment target audience groups in different ways.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Reporting and dashboards</span>. These features of customer relationship management allow you to use analytics to interpret customer data. Reporting is very useful if you want to consolidate disparate data and churn out insights in different visualizations. This lets you make better decisions or proactively deal with market trends and customer behavioral patterns. The more visual widgets a CRM solution has, the better you can present reports. Furthermore, the best customer relationship management software generates real-time data, making reporting more accurate and timely. Reporting also keeps tabs on sales opportunities like upselling, reselling, and cross-selling, especially when integrated with e-commerce platforms.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Lead management</span>. These features let you manage leads all the way to the win-loss stage. They pave a clear path to conversion, so you can quickly assess how the business is performing. One of the three main legs that comprise the best client relationship management software (the other two being contact management and reporting), lead management frees the sales team from follow-ups, tracking, and repetitive tasks.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Deals and tasks</span>. Deals and tasks are closely associated with leads. Deals are leads at the negotiation stage, so it’s critical to keep a close eye on their associated tasks for a higher chance of conversion.<br />CRM software tools should also let you track both deals and tasks in their respective windows or across the sales stages. Whether you’re viewing a contact or analyzing the sales pipeline, you should be able to immediately check the deal’s tasks and details. 
Deals and tasks should also have user permissions to prevent leaks of sensitive data. Similarly, alerts are critical to tasks so that deadlines are met. Notifications are usually sent via email or prominently displayed on the user’s dashboard.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Campaign management</span>. Solid CRM software will integrate this feature to enable marketing processes from outreach concept to A/B testing to deployment and post-campaign analysis. This will allow you to sort campaigns to target segments in your contacts and define deployment strategies. You will also be able to define metrics for various channels, then plow back the insights generated by post-campaign analytics into planning more campaigns.<br />Recurring outreach efforts can also be automated. For instance, you can set it to instantly send appropriate content to contacts based on their interests or send tiered autoresponders based on campaign feedback.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Email management</span>. By integrating with popular email clients like Gmail and Outlook, CRM solutions can capture email messages and sort important details that can be saved in contacts or synced with leads. They can also track activities like opened emails, forwarded emails, clicked links, and downloaded files. Emails can also be qualified for prospecting.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Social media management. </span>Popular CRM systems feature integrated social media management, where you can view different social media pages from the CRM’s interface. This is a convenient way to post, reply, and manage all your pages. Likewise, this feature gives you a better perspective on how customers are interacting with your brand. A glance at their likes and dislikes, interests, shares, and public conversations helps you assess customer biases and preferences. 
Customers are also increasingly using social media to contact companies; hence, a good CRM should alert you to brand mentions.</li></ul>\r\n\r\n<ul><li><span style=\"font-weight: bold;\">Mobile access</span>. With more users accessing apps via mobile devices, many vendors have been prioritizing mobile-first platforms. An Emergence Capital Partners study found over 300 mobile-first apps so far, and CRM is definitely one of their targets. Many CRM solutions have both Android and iOS apps. Mobile access is valuable in two ways: accessing data and inputting data while on location. Field sales reps with the latest sales information on hand may be able to interest prospects better. Conversely, sales reps can quickly update deals across the pipeline as soon as they come out of a client meeting.</li></ul>\r\n\r\n","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/CRM_-_Customer_Relationship_Management.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4766,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/Cisco_ONS_15454_Series.jpg","logo":true,"scheme":false,"title":"Cisco ONS 15454 Series","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":125,"alias":"cisco-ons-15454-series","companyTitle":"Cisco","companyTypes":["supplier","vendor"],"companyId":170,"companyAlias":"cisco","description":"For over a decade, service providers and enterprises alike have relied on Cisco for metro, regional, long haul, and data center optical transport networks. 
These networks transport huge quantities of data at high rates over great distances, providing the foundation for all WANs.\r\n<span style=\"font-weight: bold;\">Get fourth-generation innovation</span><br />\r\nCisco ROADM innovation continues into its fourth generation with the first Single Module ROADM. It combines multidegree wavelength switching functionality with optical amplification and spectrum analysis in a single slot line card.<br />\r\n<span style=\"font-weight: bold;\">Utilize new features</span><br />\r\nAlong with advanced features, the 15454 provides wavelength switched optical network functionality. This embeds optical layer intelligence directly into network elements to support wavelength-on-demand services and dynamic restoration.<br />\r\n<span style=\"font-weight: bold;\">Gain flexible aggregation</span><br />\r\nCisco optical transport aggregation solutions integrate packet, SONET, and OTN aggregation and switching into the DWDM transport platform. Customers will enjoy efficient wavelength fill and tight communication among network layers.<br />\r\n<span style=\"font-weight: bold;\">Streamline operations</span><br />\r\nSelected on a per card basis, a mix of Layer 1 services, time division multiplexing (TDM), and packet switching technologies can be deployed where needed. Meet customer and network requirements while simplifying operations. <br />\r\n<span style=\"font-weight: bold;\">Scale to 100 Gb and beyond</span><br />\r\nCisco leads the optical transport industry as it moves toward coherent technology for DWDM transport of 100 Gb services. 
Powered by nLight Silicon, Cisco coherent technology will scale to even greater densities and higher bit rates.","shortDescription":"Cisco ONS 15454 Series Multiservice Transport Platforms","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":7,"sellingCount":10,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Cisco ONS 15454 Series","keywords":"","description":"For over a decade, service providers and enterprises alike have relied on Cisco for metro, regional, long haul, and data center optical transport networks. These networks transport huge quantities of data at high rates over great distances, providing the found","og:title":"Cisco ONS 15454 Series","og:description":"For over a decade, service providers and enterprises alike have relied on Cisco for metro, regional, long haul, and data center optical transport networks. These networks transport huge quantities of data at high rates over great distances, providing the found","og:image":"https://old.roi4cio.com/fileadmin/user_upload/Cisco_ONS_15454_Series.jpg"},"eventUrl":"","translationId":4767,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":852,"title":"Network security","alias":"network-security","description":" Network security consists of the policies and practices adopted to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. 
Network security covers a variety of computer networks, both public and private, that are used in everyday work: conducting transactions and communications among businesses, government agencies, and individuals. Networks can be private, such as within a company, or open to public access. Network security is practiced in organizations, enterprises, and other types of institutions. It does what its name says: it secures the network and protects and oversees the operations performed on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.\r\nNetwork security starts with authentication, commonly with a username and a password. Since this requires just one item beyond the username (the password), it is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan).\r\nOnce authenticated, a firewall enforces access policies, such as which services network users are allowed to access. Though effective at preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevention system (IPS) helps detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor network traffic, much as Wireshark does, and that traffic may be logged for audit purposes and later high-level analysis. 
Newer systems combining unsupervised machine learning with full network traffic analysis can detect active network attackers from malicious insiders or targeted external attackers that have compromised a user machine or account.\r\nCommunication between two hosts using a network may be encrypted to maintain privacy.\r\nHoneypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Techniques used by the attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot. A honeypot can also direct an attacker's attention away from legitimate servers. A honeypot encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a honeypot, a honeynet is a network set up with intentional vulnerabilities. Its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. A honeynet typically contains one or more honeypots.","materialsDescription":" <span style=\"font-weight: bold;\">What is Network Security?</span>\r\nNetwork security is any action an organization takes to prevent malicious use or accidental damage to the network’s private data, its users, or their devices. The goal of network security is to keep the network running and safe for all legitimate users.\r\nBecause there are so many ways that a network can be vulnerable, network security involves a broad range of practices. These include:\r\n<ul><li><span style=\"font-weight: bold;\">Deploying active devices:</span> Using software to block malicious programs from entering, or running within, the network. 
Blocking users from sending or receiving suspicious-looking emails. Blocking unauthorized use of the network. Also, stopping the network's users from accessing websites that are known to be dangerous.</li><li><span style=\"font-weight: bold;\">Deploying passive devices:</span> For instance, using devices and software that report unauthorized intrusions into the network, or suspicious activity by authorized users.</li><li><span style=\"font-weight: bold;\">Using preventative devices:</span> Devices that help identify potential security holes, so that network staff can fix them.</li><li><span style=\"font-weight: bold;\">Ensuring users follow safe practices:</span> Even if the software and hardware are set up to be secure, the actions of users can create security holes. Network security staff are responsible for educating members of the organization about how they can stay safe from potential threats.</li></ul>\r\n<span style=\"font-weight: bold;\">Why is Network Security Important?</span>\r\nUnless it’s properly secured, any network is vulnerable to malicious use and accidental damage. Hackers, disgruntled employees, or poor security practices within the organization can leave private data exposed, including trade secrets and customers’ private details.\r\nLosing confidential research, for example, can potentially cost an organization millions of dollars by taking away competitive advantages it paid to gain. When hackers steal customers’ details and sell them for use in fraud, they create negative publicity and public mistrust of the organization.\r\nThe majority of common attacks against networks are designed to gain access to information, by spying on the communications and data of users, rather than to damage the network itself.\r\nBut attackers can do more than steal data. They may be able to damage users’ devices or manipulate systems to gain physical access to facilities. 
This leaves the organization’s property and members at risk of harm.\r\nCompetent network security procedures keep data secure and block vulnerable systems from outside interference. This allows the network’s users to remain safe and focus on achieving the organization’s goals.\r\n<span style=\"font-weight: bold;\">Why Do I Need Formal Education to Run a Computer Network?</span>\r\nEven the initial setup of security systems can be difficult for those unfamiliar with the field. A comprehensive security system is made of many pieces, each of which needs specialized knowledge.\r\nBeyond setup, each aspect of security is constantly evolving. New technology creates new opportunities for accidental security leaks, while hackers take advantage of holes in security to do damage as soon as they find them. Whoever is in charge of the network’s security needs to be able to understand the technical news and changes as they happen, so they can implement safety strategies right away.\r\nProperly securing your network using the latest information on vulnerabilities helps minimize the risk that attacks will succeed. Security Week reported that 44% of breaches in 2014 came from exploits that were 2-4 years old.\r\nUnfortunately, many of the technical aspects of network security are beyond the expertise of those who make hiring decisions. So, the best way an organization can be sure that its network security personnel are able to properly manage the threats is to hire staff with the appropriate qualifications.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Network_security.png"},{"id":548,"title":"Web security - Appliance","alias":"web-security-appliance","description":"A security appliance is any form of server appliance that is designed to protect computer networks from unwanted traffic. Types of network security appliance:\r\n<span style=\"font-weight: bold;\">Active devices</span> block unwanted traffic. 
Examples of such devices are firewalls, anti-virus scanning devices, and content filtering devices. For instance, if you want to make sure that you do not get pointless spam and other unnecessary issues, installing an active device might be a great idea. Active devices include anti-virus scanning devices, which will automatically scan throughout the network to ensure that no virus exists within the protected network. Then, there are web filtering appliances as well as firewalls, the purpose of both of which is to ensure that only useful content and traffic flows through the network and all pointless or harmful data is filtered.\r\n<span style=\"font-weight: bold;\">Passive devices detect and report on unwanted traffic.</span> A common example is intrusion detection appliances, which are installed in order to determine whether the network has been compromised in any way. These devices usually work in the background at all times.\r\n<span style=\"font-weight: bold;\">Preventative devices</span> scan networks and identify potential security problems (such as penetration testing and vulnerability assessment appliances). These devices are usually designed to 'prevent' damage to the network by identifying problems in advance. Common examples include devices that employ penetration testing as well as those devices which carry out vulnerability assessment on networks.\r\n<span style=\"font-weight: bold;\">Unified Threat Management (UTM)</span> combines features together into one system, such as some firewalls, content filtering, web caching etc. UTM devices are designed to provide users with a one-stop solution to all of their network needs and internet security appliances. As the name clearly suggests, these devices provide the features of all of the other network devices and condense them into one. These devices are designed to provide a number of different network security options in one package, hence providing networks with a simple solution. 
Rather than installing four different devices, users can easily install one and be done with it. The market for UTM devices has already exceeded the billion-dollar mark, which just goes to show how popular these devices have become amongst network users.\r\nOne of the most popular and accessible types of web security appliance tools is the hardware <span style=\"font-weight: bold;\">keylogger.</span> This device is placed covertly between the case and keyboard with an output for the computer case and input for the keyboard. As hardware standards have changed over time, a USB hardware keylogger provides access on many devices.\r\nThe <span style=\"font-weight: bold;\">web proxy appliance</span> is basically hardware you use to manage user web access. More to the point, it's the type of device that handles the blocking or controlling of suspicious programs. It's typically placed between network users and the World Wide Web; ergo, its most popular application is serving as a central control hub over employee Internet use by corporations and enterprises. It's the in-between gateway that serves as a termination point of sorts for online communications within a network and is capable of applying a multitude of rule-based limitations on Internet traffic, web content, and requests before they even end up with end users.\r\nAnother commonly used hardware tool is the <span style=\"font-weight: bold;\">wireless antenna.</span> These can be used to surveil a wide variety of wireless communications, including local cellular and internet service networks. 
More mechanical and general devices may include lockpicks or portable probes and hijack chips for compromising electronic devices through the physical circuit.\r\n<span style=\"font-weight: bold;\">Secure web gateway appliances</span> are solutions to prevent advanced threats, block unauthorized access to systems or websites, stop malware, and monitor real-time activity across websites accessed by users within the institution. Software and cloud-based platforms now perform this function as well.","materialsDescription":"<h1 class=\"align-center\"> What are the top Network Security Appliance brands?</h1>\r\n<span style=\"font-weight: bold;\">Blue Coat Systems.</span> Sunnyvale, Calif.-based Blue Coat has been part of security powerhouse Symantec since 2016.\r\n<span style=\"font-weight: bold;\">F5 Networks.</span> The Seattle-based network application delivery vendor sold about $17.6 million in network security appliances through the channel in the second quarter, NPD said.\r\n<span style=\"font-weight: bold;\">SonicWall.</span> Firewall power player SonicWall sold about $23.5 million in network security appliances through the channel in the second quarter, according to NPD.\r\n<span style=\"font-weight: bold;\">Fortinet.</span> Sunnyvale, Calif.-based security software vendor Fortinet sold about $24.4 million in network security appliances through the channel in the second quarter, NPD said.\r\n<span style=\"font-weight: bold;\">Cisco Systems.</span> Cisco Systems was the quarter's growth champion, posting $77.2 million in network security appliance sales through the channel in the period, beating the previous year’s quarterly total of $62.3 million by about 24 percent, according to NPD.\r\n<span style=\"font-weight: bold;\">Palo Alto Networks.</span> With $94.2 million in network security appliance sales in the quarter, Palo Alto Networks was the best-selling network security appliance brand of the second quarter, according to 
NPD.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Web_security_Appliance.png"},{"id":513,"title":"Networking","alias":"networking","description":" Networking hardware, also known as network equipment or computer networking devices, comprises the electronic devices required for communication and interaction between devices on a computer network. Specifically, these devices mediate data transmission in a computer network. Units that are the final receivers of data, or that generate data, are called hosts or data terminal equipment.\r\nNetworking devices may include gateways, routers, network bridges, modems, wireless access points, networking cables, line drivers, switches, hubs, and repeaters; and may also include hybrid network devices such as multilayer switches, protocol converters, bridge routers, proxy servers, firewalls, network address translators, multiplexers, network interface controllers, wireless network interface controllers, ISDN terminal adapters and other related hardware.\r\nThe most common kind of networking hardware today is the copper-based Ethernet adapter, which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices.\r\nOther networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) as well as devices which assure content delivery.\r\nTaking a wider view, mobile phones, tablet computers and devices associated with the internet of things may also be considered networking hardware. 
As technology advances and IP-based networks are integrated into building infrastructure and household utilities, network hardware will become an ambiguous term owing to the vastly increasing number of network-capable endpoints.","materialsDescription":" <span style=\"font-weight: bold;\">What is network equipment?</span>\r\nNetwork equipment - the devices necessary for the operation of a computer network, for example: a router, switch, hub, patch panel, etc. A distinction can be drawn between active and passive network equipment.\r\n<span style=\"font-weight: bold;\">What is active network equipment?</span>\r\nActive network equipment is equipment endowed with some “smart” capability. That is, a router, a switch, etc. are active network equipment.\r\n<span style=\"font-weight: bold;\">What is passive network equipment?</span>\r\nPassive network equipment - equipment without “smart” features. For example, the cable system: cables (coaxial and twisted pair (UTP/STP)), plugs/sockets (RG58, RJ45, RJ11, GG45), repeaters, patch panels, hubs, and baluns for coaxial cables (RG-58). Passive equipment can also include mounting cabinets, racks, and telecommunication cabinets.\r\n<span style=\"font-weight: bold;\">What are the main network components?</span>\r\nThe main components of the network are workstations, servers, transmission media (cables) and network equipment.\r\n<span style=\"font-weight: bold;\">What are workstations?</span>\r\nWorkstations are networked computers on which users run their application tasks.\r\n<span style=\"font-weight: bold;\">What are network servers?</span>\r\nNetwork servers are hardware and software systems that control the distribution of shared network resources. A server can be any computer connected to the network on which the resources used by other devices on the local network are located. 
As the server hardware, fairly powerful computers are used.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Networking.png"},{"id":172,"title":"WLAN - wireless network","alias":"wlan-wireless-network","description":"Unified Communications (UC) is a marketing buzzword describing the integration of real-time, enterprise, communication services such as instant messaging (chat), presence information, voice (including IP telephony), mobility features (including extension mobility and single number reach), audio, web & video conferencing, fixed-mobile convergence (FMC), desktop sharing, data sharing (including web connected electronic interactive whiteboards), call control and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS and fax). UC is not necessarily a single product, but a set of products that provides a consistent unified user-interface and user-experience across multiple devices and media-types.\r\n\r\nIn its broadest sense, UC can encompass all forms of communications that are exchanged via a network to include other forms of communications such as Internet Protocol Television (IPTV) and digital signage Communications as they become an integrated part of the network communications deployment and may be directed as one-to-one communications or broadcast communications from one to many.\r\n\r\nUC allows an individual to send a message on one medium, and receive the same communication on another medium. For example, one can receive a voicemail message and choose to access it through e-mail or a cell phone. If the sender is online according to the presence information and currently accepts calls, the response can be sent immediately through text chat or video call. 
Otherwise, it may be sent as a non-real-time message that can be accessed through a variety of media.\r\n\r\nSource: https://en.wikipedia.org/wiki/Unified_communications","materialsDescription":"","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/WLAN_-_wireless_network.png"},{"id":475,"title":"Network Management - Hardware","alias":"network-management-hardware","description":" Your business is much more than just a machine that dispenses products or services in exchange for money. It’s akin to a living and breathing thing. Just as with the human body, in business, all the parts are interconnected and work together to move things forward.\r\nIf a company’s management is the brain, then its employees are the muscles. Muscles don’t work without the oxygen carried to them by the blood. Blood doesn’t pump through the body without the heart and circulatory system.\r\nData moves through your network like blood through veins, delivering vital information to employees who need it to do their jobs. In a business sense, the digital network is the heart and circulatory system. Without a properly functioning network, the entire business collapses. That’s why keeping networks healthy is vitally important. Just as keeping the heart healthy is critical to living a healthy life, a healthy network is key to a thriving business. It starts with network management.\r\nNetwork management hardware supports a broad range of functions, including the activities, methods, procedures, and tools used to administer, operate, and reliably maintain computer network systems.\r\nStrictly speaking, network management does not include terminal equipment (PCs, workstations, printers, etc.). 
Rather, it concerns the reliability, efficiency and capacity/capabilities of data transfer channels.","materialsDescription":" <span style=\"font-weight: bold;\">What Is Network Management?</span>\r\nNetwork management refers to the processes, tools, and applications used to administer, operate and maintain network infrastructure. Performance management and fault analysis also fall into the category of network management. To put it simply, network management is the process of keeping your network healthy, which keeps your business healthy.\r\n<span style=\"font-weight: bold;\">What Are the Components of Network Management?</span>\r\nThe definition of network management is often broad, as network management involves several different components. Here are some of the terms you’ll often hear when network management or network management software is talked about:\r\n<ul><li>Network administration</li><li>Network maintenance</li><li>Network operation</li><li>Network provisioning</li><li>Network security</li></ul>\r\n<span style=\"font-weight: bold;\">Why Is Network Management so Important When It Comes to Network Infrastructure?</span>\r\nThe whole point of network management is to keep the network infrastructure running smoothly and efficiently. Network management helps you:\r\n<ul><li><span style=\"font-style: italic;\">Avoid costly network disruptions.</span> Network downtime can be very costly. In fact, industry research shows the cost can be up to $5,600 per minute or more than $300K per hour. Network disruptions take more than just a financial toll. They also have a negative impact on customer relationships. Slow and unresponsive corporate networks make it harder for employees to serve customers. And customers who feel underserved could be quick to leave.</li><li><span style=\"font-style: italic;\">Improve IT productivity.</span> By monitoring every aspect of the network, an effective network management system does many jobs at once. 
This frees up IT staff to focus on other things.</li><li><span style=\"font-style: italic;\">Improve network security.</span> With a focus on network management, it’s easy to identify and respond to threats before they propagate and impact end-users. Network management also aims to ensure regulatory and compliance requirements are met.</li><li><span style=\"font-style: italic;\">Gain a holistic view of network performance.</span> Network management gives you a complete view of how your network is performing. It enables you to identify issues and fix them quickly.</li></ul>\r\n<span style=\"font-weight: bold;\">What Are the Challenges of Maintaining Effective Network Management and Network Infrastructure?</span>\r\nNetwork infrastructures can be complex. Because of that complexity, maintaining effective network management is difficult. Advances in technology and the cloud have increased user expectations for faster network speeds and network availability. On top of that, security threats are becoming ever more advanced, varied and numerous. And if you have a large network, it incorporates several devices, systems, and tools that all need to work together seamlessly. As your network scales and your company grows, new potential points of failure are introduced. 
Increased costs also come into play.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Network_Management_Hardware__1_.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":5025,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/dell_emc_product.jpg","logo":true,"scheme":false,"title":"Dell EMC Networker","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":59,"alias":"dell-emc-networker","companyTitle":"Dell EMC","companyTypes":["vendor"],"companyId":955,"companyAlias":"dell-emc","description":"Whether your organization is a small office or a large data center, leverages on-premises resources or applications in the cloud, NetWorker provides a common user experience to protect your data. \r\n<span style=\"font-weight: bold;\">Centralized Backup and Recovery</span><br />\r\nNetWorker delivers centralized backup and recovery operations for complete control of data protection across diverse computing and storage environments.\r\n<ul><li>Virtual and physical environments</li></ul>\r\n<ul><li>Critical business applications</li></ul>\r\n<ul><li>Storage area networks (SANs), network-attached storage (NAS), and direct-attached storage (DAS).</li></ul>\r\n<ul><li>Backup storage options including, tape drives and libraries, virtual tape libraries, disk arrays, deduplication storage systems, and object storage in the cloud.</li></ul>\r\n<span style=\"font-weight: bold;\">Performance and Security</span><br />\r\nNetWorker delivers enterprise-class performance and security to meet even the most demanding service level requirements.<br />Integration with advanced technologies such as array-based snapshots (both block and file) and the VMware vStorage APIs for Data Protection provides fast, efficient, and non-disruptive backup.<br />\r\nSuperb 
performance includes:\r\n<ul><li>Deploy a vProxy in less than 5 minutes</li></ul>\r\n<ul><li>Protect thousands of virtual machines with a single server</li></ul>\r\n<ul><li>Protect thousands of virtual machines through a single vCenter</li></ul>\r\n<ul><li>Support for 256-bit AES encryption</li></ul>\r\n<ul><li>Secure lockbox control</li></ul>\r\n<ul><li>Enhanced user authentication</li></ul>\r\n<ul><li>Role based authorization </li></ul>\r\n<span style=\"font-weight: bold;\"><br /></span>\r\n<span style=\"font-weight: bold;\">BENEFITS</span><br />\r\nCENTRALIZED MANAGEMENT\r\n<ul><li>Simplifies and automates backup and recovery operations</li></ul>\r\n<ul><li>Integration with DPC offers centralized alerting, reporting and search</li></ul>\r\n<ul><li>Management of Data Domain from within the NetWorker UI</li></ul>\r\nDATA DOMAIN INTEGRATION\r\n<ul><li>Enables long-term retention of backups to the cloud with Data Domain Cloud Tier</li></ul>\r\n<ul><li>Instant access and recovery of VMware Image backups</li></ul>\r\n<ul><li>Reduce infrastructure utilization and cost</li></ul>\r\nCLOUD BACKUP AND RECOVERY\r\n<ul><li>Cost effective backup to object storage in the cloud</li></ul>\r\n<ul><li>Simplified deployment in native cloud formats</li></ul>\r\n<ul><li>Flexible backup and recovery for Azure Stack </li></ul>","shortDescription":"Dell EMC Networker unified data backup and recovery software includes a range of data protection options across physical and virtual environments.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":13,"sellingCount":7,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Dell EMC Networker","keywords":"","description":"Whether your organization is a small office or a large data center, leverages on-premises resources or applications in the cloud, NetWorker provides a common user experience to protect your data. 
\r\n<span style=\"font-weight: bold;\">Centralized Backup and Recove","og:title":"Dell EMC Networker","og:description":"Whether your organization is a small office or a large data center, leverages on-premises resources or applications in the cloud, NetWorker provides a common user experience to protect your data. \r\n<span style=\"font-weight: bold;\">Centralized Backup and Recove","og:image":"https://old.roi4cio.com/fileadmin/user_upload/dell_emc_product.jpg"},"eventUrl":"","translationId":5026,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":46,"title":"Data Protection and Recovery Software","alias":"data-protection-and-recovery-software","description":"Data protection and recovery software provide data backup, integrity and security for data backups and it enables timely, reliable and secure backup of data from a host device to destination device. Recently, Data Protection and Recovery Software market are disrupted by innovative technologies such as server virtualization, disk-based backup, and cloud services where emerging players are playing an important role. Tier one players such as IBM, Hewlett Packard Enterprise, EMC Corporation, Symantec Corporation and Microsoft Corporation are also moving towards these technologies through partnerships and acquisitions.\r\nThe major factor driving data protection and recovery software market is the high adoption of cloud-based services and technologies. Many organizations are moving towards the cloud to reduce their operational expenses and to provide real-time access to their employees. 
However, increased usage of the cloud has increased the risk of data loss and data theft and unauthorized access to confidential information, which increases the demand for data protection and recovery solution suites.","materialsDescription":" \r\n<span style=\"font-weight: bold; \">What is Data recovery?</span>\r\nData recovery is a process of salvaging (retrieving) inaccessible, lost, corrupted, damaged or formatted data from secondary storage, removable media or files, when the data stored in them cannot be accessed in a normal way. The data is most often salvaged from storage media such as internal or external hard disk drives (HDDs), solid-state drives (SSDs), USB flash drives, magnetic tapes, CDs, DVDs, RAID subsystems, and other electronic devices. Recovery may be required due to physical damage to the storage devices or logical damage to the file system that prevents it from being mounted by the host operating system (OS).\r\nThe most common data recovery scenario involves an operating system failure, malfunction of a storage device, logical failure of storage devices, accidental damage or deletion, etc. (typically, on a single-drive, single-partition, single-OS system), in which case the ultimate goal is simply to copy all important files from the damaged media to another new drive. This can be easily accomplished using a Live CD or DVD by booting directly from a ROM instead of the corrupted drive in question. Many Live CDs or DVDs provide a means to mount the system drive and backup drives or removable media, and to move the files from the system drive to the backup media with a file manager or optical disc authoring software. Such cases can often be mitigated by disk partitioning and consistently storing valuable data files (or copies of them) on a different partition from the replaceable OS system files.\r\nAnother scenario involves a drive-level failure, such as a compromised file system or drive partition, or a hard disk drive failure. 
In any of these cases, the data is not easily read from the media devices. Depending on the situation, solutions involve repairing the logical file system, partition table or master boot record, updating the firmware, or drive recovery techniques ranging from software-based recovery of corrupted data, hardware- and software-based recovery of damaged service areas (also known as the hard disk drive's "firmware"), to hardware replacement on a physically damaged drive which allows for extraction of data to a new drive. If a drive recovery is necessary, the drive itself has typically failed permanently, and the focus is rather on a one-time recovery, salvaging whatever data can be read.\r\nIn a third scenario, files have been accidentally "deleted" from a storage medium by the user. Typically, the contents of deleted files are not removed immediately from the physical drive; instead, references to them in the directory structure are removed, and the space the deleted data occupies is thereafter made available for later overwriting. From the end user's perspective, deleted files are not discoverable through a standard file manager, but the deleted data still technically exists on the physical drive. In the meantime, the original file contents remain, often in a number of disconnected fragments, and may be recoverable if not overwritten by other data files.\r\nThe term "data recovery" is also used in the context of forensic applications or espionage, where data which have been encrypted or hidden, rather than damaged, are recovered. 
Sometimes data present in the computer gets encrypted or hidden due to reasons like virus attack which can only be recovered by some computer forensic experts.\r\n<span style=\"font-weight: bold;\">What is a backup?</span>\r\nA backup, or data backup, or the process of backing up, refers to the copying into an archive file of computer data that is already in secondary storage—so that it may be used to restore the original after a data loss event. The verb form is "back up" (a phrasal verb), whereas the noun and adjective form is "backup".\r\nBackups have two distinct purposes. The primary purpose is to recover data after its loss, be it by data deletion or corruption. Data loss can be a common experience of computer users; a 2008 survey found that 66% of respondents had lost files on their home PC. The secondary purpose of backups is to recover data from an earlier time, according to a user-defined data retention policy, typically configured within a backup application for how long copies of data are required. Though backups represent a simple form of disaster recovery and should be part of any disaster recovery plan, backups by themselves should not be considered a complete disaster recovery plan. One reason for this is that not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server by simply restoring data from a backup.\r\nSince a backup system contains at least one copy of all data considered worth saving, the data storage requirements can be significant. Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository model may be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are useful for making backups. 
There are also many different ways in which these devices can be arranged to provide geographic redundancy, data security, and portability.\r\nBefore data are sent to their storage locations, they are selected, extracted, and manipulated. Many different techniques have been developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources as well as compression, encryption, and de-duplication, among others. Every backup scheme should include dry runs that validate the reliability of the data being backed up. It is important to recognize the limitations and human factors involved in any backup scheme.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/Data_Protection_and_Recovery_Software__1_.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]},{"id":4770,"logoURL":"https://old.roi4cio.com/fileadmin/user_upload/Dell_PowerEdge_MX7000_Modular_Chassis.jpg","logo":true,"scheme":false,"title":"Dell PowerEdge MX7000 Modular Chassis","vendorVerified":0,"rating":"0.00","implementationsCount":1,"suppliersCount":0,"supplierPartnersCount":15,"alias":"dell-poweredge-mx7000-modular-chassis","companyTitle":"DELL","companyTypes":["vendor"],"companyId":169,"companyAlias":"dell","description":"<span style=\"font-weight: bold; \">Liberate IT resources to achieve optimal utilization, productivity and efficiency</span>\r\nAs dynamic and innovative as your business, PowerEdge MX kinetic infrastructure bridges traditional and software-defined data centers with unequaled flexibility and agility. At the foundation, the PowerEdge MX7000 chassis hosts disaggregated blocks of server and storage to create consumable resources on-demand. 
Shared power, cooling, networking, I/O and in-chassis management provide outstanding efficiencies.\r\n<ul><li>7U modular enclosure with eight slots holds eight 2S single-width or four 4S double-width compute sleds and 12Gb/s single-width storage sleds</li><li>25Gb Ethernet, 12Gb SAS and 32Gb Fibre Channel I/O options</li><li>Three I/O networking fabrics, two general purpose and one storage specific, each with redundant modules</li><li>Multi-chassis networking up to 10 chassis</li><li>Single management point for compute, storage and networking</li><li>High-speed technology connections, now and into the future, with no midplane upgrade</li><li>Support assurance for at least three server processor microarchitecture generations</li></ul>\r\n<span style=\"font-weight: bold; \">Dynamically scale and respond with kinetic infrastructure</span>\r\nDesigned with Dell EMC’s kinetic infrastructure, PowerEdge MX creates shared pools of disaggregated compute and storage resources, connected by a scalable fabric, from which workloads can draw the resources needed to run most quickly and efficiently. Then, when no longer needed, the resources are returned to the pool. By essentially creating hardware on the fly, capacity can be managed at the data center level instead of at a per-server level.\r\n<ul><li>Full-featured, no-compromise compute sleds with Intel® Xeon® Scalable processors</li><li>Generous, scalable on-board SAS, SATA, and NVMe storage drives, plus substantial, granular SAS direct-attached storage using optional storage sleds</li><li>Scalable fabric architecture with a grow-as-you-need fabric expansion capability for up to 10 chassis in a fabric</li></ul>\r\n<span style=\"font-weight: bold; \">Increase effectiveness and accelerate operations with unified automation</span>\r\nEmbedded Dell EMC OpenManage Enterprise – Modular Edition delivers the key abilities of OpenManage Enterprise systems management within the PowerEdge MX chassis. 
A unified, simple interface manages compute, storage and fabric, reducing costs and the learning curve while consolidating multiple tools. Redundant management modules ensure the highest availability.\r\n<ul><li>Automatic expansion from one to multiple chassis; scale management to thousands of PowerEdge MX and rack servers with OpenManage Enterprise</li><li>Flexible, at-the-box management front control panel options include Quick Sync 2 (wireless), touchscreen LCD and traditional crash cart</li><li>Comprehensive RESTful API helps automate multiple tasks and integrates with third-party tools</li><li>Seamlessly integrates with integrated Dell Remote Access Controller 9 (iDRAC9) and Lifecycle Controller (LC)</li></ul>\r\n<span style=\"font-weight: bold;\">Protect infrastructure and investment with responsive design</span>\r\nReduce the risk of infrastructure investment and help make new innovations more easily available with the PowerEdge MX7000's future-forward architecture. Designed to maximize longevity and minimize disruptive technology changes, it provides support across both generational and architectural transitions.\r\n<ul><li>Multi-generational assurance with support for at least three server processor microarchitecture generations</li><li>Nearly zero throughput limitations, providing high-speed technology connections now and well into the future, with no midplane upgrade</li><li>Industry-leading thermal architecture, mechanical design and control algorithms support dense configurations and future compatibility</li></ul>","shortDescription":"Dynamically assign, move and scale shared pools of compute, storage and fabric, with greater flexibility and efficiency, and deliver optimal value.","type":null,"isRoiCalculatorAvaliable":false,"isConfiguratorAvaliable":false,"bonus":100,"usingCount":11,"sellingCount":1,"discontinued":0,"rebateForPoc":0,"rebate":0,"seo":{"title":"Dell PowerEdge MX7000 Modular Chassis","keywords":"","description":"<span style=\"font-weight: bold; 
\">Liberate IT resources to achieve optimal utilization, productivity and efficiency</span>\r\nAs dynamic and innovative as your business, PowerEdge MX kinetic infrastructure bridges traditional and software-defined data centers wi","og:title":"Dell PowerEdge MX7000 Modular Chassis","og:description":"<span style=\"font-weight: bold; \">Liberate IT resources to achieve optimal utilization, productivity and efficiency</span>\r\nAs dynamic and innovative as your business, PowerEdge MX kinetic infrastructure bridges traditional and software-defined data centers wi","og:image":"https://old.roi4cio.com/fileadmin/user_upload/Dell_PowerEdge_MX7000_Modular_Chassis.jpg"},"eventUrl":"","translationId":4771,"dealDetails":null,"roi":null,"price":null,"bonusForReference":null,"templateData":[],"testingArea":"","categories":[{"id":4,"title":"Data center","alias":"data-center","description":" A data center (or datacenter) is a facility composed of networked computers and storage that businesses or other organizations use to organize, process, store and disseminate large amounts of data. A business typically relies heavily upon the applications, services and data contained within a data center, making it a focal point and critical asset for everyday operations.\r\nData centers are not a single thing, but rather, a conglomeration of elements. At a minimum, data centers serve as the principal repositories for all manner of IT equipment, including servers, storage subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks used to organize and interconnect the IT equipment. A data center must also contain an adequate infrastructure, such as power distribution and supplemental power subsystems, including electrical switching; uninterruptable power supplies; backup generators and so on; ventilation and data center cooling systems, such as computer room air conditioners; and adequate provisioning for network carrier (telco) connectivity. 
All of this demands a physical facility with physical security and sufficient physical space to house the entire collection of infrastructure and equipment.","materialsDescription":" <span style=\"font-weight: bold;\">What are the requirements for modern data centers?</span>\r\nModernization and data center transformation enhances performance and energy efficiency.\r\nInformation security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment.\r\nIndustry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize.\r\nFocus on modernization is not new: Concern about obsolete equipment was decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment."\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Meeting standards for data centers</span></span>\r\nThe Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. 
The topology proposed in this document is intended to be applicable to any size data center.\r\nTelcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:\r\n<ul><li>Operate and manage a carrier's telecommunication network</li><li>Provide data center based applications directly to the carrier's customers</li><li>Provide hosted applications for a third party to provide services to their customers</li><li>Provide a combination of these and similar data center applications</li></ul>\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Data center transformation</span></span>\r\nData center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.\r\n<ul><li>Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization.</li><li>Virtualization: Lowers capital and operational expenses, reduce energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations will be virtualized by 2012. 
Gartner views virtualization as a catalyst for modernization.</li><li>Automating: Automating tasks such as provisioning, configuration, patching, release management and compliance is needed, not just when facing fewer skilled IT workers.</li><li>Securing: Protection of virtual systems is integrated with existing security of physical infrastructures.</li></ul>\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Machine room</span></span>\r\nThe term "Machine Room" is at times used to refer to the large room within a Data Center where the actual Central Processing Unit is located; this may be separate from where high-speed printers are located. Air conditioning is most important in the machine room.\r\nAside from air-conditioning, there must be monitoring equipment, one type of which is to detect water prior to flood-level situations. One company, for several decades, has had share-of-mind: Water Alert. The company, as of 2018, has 2 competing manufacturers (Invetex, Hydro-Temp) and 3 competing distributors (Longden,Northeast Flooring, Slayton). ","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Data_center.png"},{"id":471,"title":"Hardware","alias":"hardware","description":" Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, monitor, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is instructions that can be stored and run by hardware. Hardware is so-termed because it is "hard" or rigid with respect to changes or modifications; whereas software is "soft" because it is easy to update or change. Intermediate between software and hardware is "firmware", which is software that is strongly coupled to the particular hardware of a computer system and thus the most difficult to change but also among the most stable with respect to consistency of interface. 
The progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing.\r\nHardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components.\r\nThe template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system.","materialsDescription":" <span style=\"font-weight: bold; \">What does Hardware (H/W) mean?</span>\r\nHardware (H/W), in the context of technology, refers to the physical elements that make up a computer or electronic system and everything else involved that is physically tangible. This includes the monitor, hard drive, memory and CPU. Hardware works hand-in-hand with firmware and software to make a computer function.\r\n<span style=\"font-weight: bold; \">What are the types of computer systems?</span>\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Personal computer</span></span>\r\nThe personal computer, also known as the PC, is one of the most common types of computer due to its versatility and relatively low price. 
Laptops are generally very similar, although they may use lower-power or reduced size components, thus lower performance.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Case</span></span>\r\nThe computer case encloses and holds most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives, and power supplies, and controls and directs the flow of cooling air over internal components. The case is also part of the system to control electromagnetic interference radiated by the computer, and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Power supply</span></span>\r\nA power supply unit (PSU) converts alternating current (AC) electric power to low-voltage direct current (DC) power for the internal components of the computer. Laptops are capable of running from a built-in battery, normally for a period of hours. The PSU typically uses a switched-mode power supply (SMPS), with power MOSFETs (power metal–oxide–semiconductor field-effect transistors) used in the converters and regulator circuits of the SMPS.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Motherboard</span></span>\r\nThe motherboard is the main component of a computer. 
It is a board with integrated circuitry that connects the other parts of the computer, including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others), as well as any peripherals connected via the ports or the expansion slots. The integrated circuit (IC) chips in a computer typically contain billions of tiny metal–oxide–semiconductor field-effect transistors (MOSFETs).\r\nComponents directly attached to or part of the motherboard include:\r\n<ul><li><span style=\"font-weight: bold; \">The CPU (central processing unit)</span>, which performs most of the calculations that enable a computer to function and is referred to as the brain of the computer. It fetches program instructions from random-access memory (RAM), interprets and processes them, and then sends the results back so that the relevant components can carry out the instructions. The CPU is a microprocessor, which is fabricated on a metal–oxide–semiconductor (MOS) integrated circuit (IC) chip. It is usually cooled by a heat sink and fan, or a water-cooling system. Most newer CPUs include an on-die graphics processing unit (GPU). The clock speed of the CPU governs how fast it executes instructions, and is measured in GHz; typical values lie between 1 GHz and 5 GHz. 
Many modern computers have the option to overclock the CPU which enhances performance at the expense of greater thermal output and thus a need for improved cooling.</li><li><span style=\"font-weight: bold; \">The chipset</span>, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory; as well as south bridge, which is connected to the north bridge, and supports auxiliary interfaces and buses; and, finally, a Super I/O chip, connected through the south bridge, which supports the slowest and most legacy components like serial ports, hardware monitoring and fan control.</li><li><span style=\"font-weight: bold; \">Random-access memory (RAM)</span>, which stores the code and data that are being actively accessed by the CPU. For example, when a web browser is opened on the computer it takes up memory; this is stored in the RAM until the web browser is closed. It is typically a type of dynamic RAM (DRAM), such as synchronous DRAM (SDRAM), where MOS memory chips store data on memory cells consisting of MOSFETs and MOS capacitors. RAM usually comes on dual in-line memory modules (DIMMs) in the sizes of 2GB, 4GB, and 8GB, but can be much larger.</li><li><span style=\"font-weight: bold; \">Read-only memory (ROM)</span>, which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as Bootstrapping, or "booting" or "booting up". The ROM is typically a nonvolatile BIOS memory chip, which stores data on floating-gate MOSFET memory cells.</li><li><span style=\"font-weight: bold; \">The BIOS (Basic Input Output System)</span> includes boot firmware and power management firmware. 
Newer motherboards use Unified Extensible Firmware Interface (UEFI) instead of BIOS.</li><li><span style=\"font-weight: bold; \">Buses</span> that connect the CPU to various internal components and to expand cards for graphics and sound.</li><li><span style=\"font-weight: bold; \">The CMOS</span> (complementary MOS) battery, which powers the CMOS memory for date and time in the BIOS chip. This battery is generally a watch battery.</li><li><span style=\"font-weight: bold; \">The video card</span> (also known as the graphics card), which processes computer graphics. More powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games or running computer graphics software. A video card contains a graphics processing unit (GPU) and video memory (typically a type of SDRAM), both fabricated on MOS integrated circuit (MOS IC) chips.</li><li><span style=\"font-weight: bold; \">Power MOSFETs</span> make up the voltage regulator module (VRM), which controls how much voltage other hardware components receive.</li></ul>\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Expansion cards</span></span>\r\nAn expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus. Expansion cards can be used to obtain or expand on features not offered by the motherboard.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Storage devices</span></span>\r\nA storage device is any computing hardware and digital media that is used for storing, porting and extracting data files and objects. It can hold and store information both temporarily and permanently, and can be internal or external to a computer, server or any similar computing device. 
Data storage is a core function and fundamental component of computers.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Fixed media</span></span>\r\nData is stored by a computer using a variety of media. Hard disk drives (HDDs) are found in virtually all older computers, due to their high capacity and low cost, but solid-state drives (SSDs) are faster and more power efficient, although currently more expensive than hard drives in terms of dollar per gigabyte, so are often found in personal computers built post-2007. SSDs use flash memory, which stores data on MOS memory chips consisting of floating-gate MOSFET memory cells. Some systems may use a disk array controller for greater performance or reliability.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Removable media</span></span>\r\nTo transfer data between computers, an external flash memory device (such as a memory card or USB flash drive) or optical disc (such as a CD-ROM, DVD-ROM or BD-ROM) may be used. Their usefulness depends on being readable by other systems; the majority of machines have an optical disk drive (ODD), and virtually all have at least one Universal Serial Bus (USB) port.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Input and output peripherals</span></span>\r\nInput and output devices are typically housed externally to the main computer chassis. The following are either standard or very common to many computer systems.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Input</span></span>\r\nInput devices allow the user to enter information into the system, or control its operation. Most personal computers have a mouse and keyboard, but laptop systems typically use a touchpad instead of a mouse. 
Other input devices include webcams, microphones, joysticks, and image scanners.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Output device</span></span>\r\nOutput devices display information in a human-readable form. Such devices could include printers, speakers, monitors or a Braille embosser.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Mainframe computer</span></span>\r\nA mainframe computer is a much larger computer that typically fills a room and may cost many hundreds or thousands of times as much as a personal computer. They are designed to perform large numbers of calculations for governments and large enterprises.\r\n<span style=\"font-style: italic; \"><span style=\"font-weight: bold; \">Departmental computing</span></span>\r\nIn the 1960s and 1970s, more and more departments started to use cheaper and dedicated systems for specific purposes like process control and laboratory automation.\r\n<span style=\"font-style: italic;\"><span style=\"font-weight: bold;\">Supercomputer</span></span>\r\nA supercomputer is superficially similar to a mainframe, but is instead intended for extremely demanding computational tasks. As of June 2018, the fastest supercomputer on the TOP500 supercomputer list is Summit, in the United States, with a LINPACK benchmark score of 122.3 PFLOPS, exceeding the previous record holder, Sunway TaihuLight, by around 29 PFLOPS.\r\nThe term supercomputer does not refer to a specific technology. Rather, it indicates the fastest computations available at any given time. In mid-2011, the fastest supercomputers boasted speeds exceeding one petaflop, or 1 quadrillion (10^15 or 1,000 trillion) floating point operations per second. Supercomputers are fast but extremely costly, so they are generally used by large organizations to execute computationally demanding tasks involving large data sets. Supercomputers typically run military and scientific applications. 
Although costly, they are also being used for commercial applications where huge amounts of data must be analyzed. For example, large banks employ supercomputers to calculate the risks and returns of various investment strategies, and healthcare organizations use them to analyze giant databases of patient data to determine optimal treatments for various diseases. ","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Hardware.jpg"},{"id":513,"title":"Networking","alias":"networking","description":" Networking hardware, also known as network equipment or computer networking devices, are electronic devices which are required for communication and interaction between devices on a computer network. Specifically, they mediate data transmission in a computer network. Units which are the last receiver or generate data are called hosts or data terminal equipment.\r\nNetworking devices may include gateways, routers, network bridges, modems, wireless access points, networking cables, line drivers, switches, hubs, and repeaters; and may also include hybrid network devices such as multilayer switches, protocol converters, bridge routers, proxy servers, firewalls, network address translators, multiplexers, network interface controllers, wireless network interface controllers, ISDN terminal adapters and other related hardware.\r\nThe most common kind of networking hardware today is a copper-based Ethernet adapter, which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices.\r\nOther networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) 
as well as devices which assure content delivery.\r\nTaking a wider view, mobile phones, tablet computers and devices associated with the internet of things may also be considered networking hardware. As technology advances and IP-based networks are integrated into building infrastructure and household utilities, network hardware will become an ambiguous term owing to the vastly increasing number of network-capable endpoints.","materialsDescription":" <span style=\"font-weight: bold;\">What is network equipment?</span>\r\nNetwork equipment comprises the devices necessary for the operation of a computer network, for example routers, switches, hubs and patch panels. A distinction is made between active and passive network equipment.\r\n<span style=\"font-weight: bold;\">What is active network equipment?</span>\r\nActive network equipment is equipment with some “smart” processing capability: routers, switches and similar devices are active network equipment.\r\n<span style=\"font-weight: bold;\">What is passive network equipment?</span>\r\nPassive network equipment is equipment without such “intelligent” features, essentially the cable system: cables (coaxial and twisted pair (UTP/STP)), plugs and sockets (RG58, RJ45, RJ11, GG45), repeaters, patch panels, hubs, and baluns for coaxial cables (RG-58). Passive equipment also includes mounting cabinets, racks and telecommunication cabinets.\r\n<span style=\"font-weight: bold;\">What are the main network components?</span>\r\nThe main components of a network are workstations, servers, transmission media (cables) and network equipment.\r\n<span style=\"font-weight: bold;\">What are workstations?</span>\r\nWorkstations are the networked computers on which users run their application tasks.\r\n<span style=\"font-weight: bold;\">What are network servers?</span>\r\nNetwork servers are hardware and software systems that control the distribution of shared network resources. 
A server can be any computer connected to the network that hosts resources used by other devices on the local network; fairly powerful computers are typically used as server hardware.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Networking.png"},{"id":35,"title":"Server","alias":"server","description":"In computing, a server is a computer program or a device that provides functionality for other programs or devices, called “clients”. This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called “services”, such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.\r\nClient–server systems are today most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgement. Designating a computer as “server-class hardware” implies that it is specialized for running servers on it. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.\r\nStrictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. 
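The request–response model described here can be sketched as a minimal TCP client and server pair. This is an illustrative sketch using Python's standard socket and threading modules; the loopback address, port number and message bytes are arbitrary choices for the example, not anything from the source:

```python
import socket
import threading

HOST, PORT = '127.0.0.1', 50007  # arbitrary loopback endpoint for the sketch
ready = threading.Event()

def serve_once():
    # Server role: wait for one request, perform an action, send a response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                        # listening; the client may connect
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)      # receive the client's request
            conn.sendall(b'echo: ' + request)  # respond with a result

server = threading.Thread(target=serve_once)
server.start()
ready.wait()

# Client role: send a request, then block until the response arrives.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b'hello')
    reply = cli.recv(1024)

server.join()
print(reply.decode())  # echo: hello
```

The same request–response shape underlies higher-level protocols such as HTTP; a web server simply performs a more elaborate action before responding.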
In addition to server, the words serve and service (as noun and as verb) are frequently used, though servicer and servant are not. The word service (noun) may refer to either the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests".\r\nThe server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.\r\nWhile request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; this initial registration may be done by request–response. 
Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.","materialsDescription":" <span style=\"font-weight: bold;\">What is a server?</span>\r\nA server is a software or hardware device that accepts and responds to requests made over a network. The device that makes the request, and receives a response from the server, is called a client. On the Internet, the term "server" commonly refers to the computer system which receives a request for a web document and sends the requested information to the client.\r\n<span style=\"font-weight: bold;\">What are they used for?</span>\r\nServers are used to manage network resources. For example, a user may set up a server to control access to a network, send/receive an e-mail, manage print jobs, or host a website. They are also proficient at performing intense calculations. Some servers are committed to a specific task, often referred to as dedicated. However, many servers today are shared servers which can take on the responsibility of e-mail, DNS, FTP, and even multiple websites in the case of a web server.\r\n<span style=\"font-weight: bold;\">Why are servers always on?</span>\r\nBecause they are commonly used to deliver services that are constantly required, most servers are never turned off. Consequently, when servers fail, they can cause the network users and company many problems. 
To alleviate these issues, servers are commonly set up to be fault-tolerant.\r\n<span style=\"font-weight: bold;\">What are the examples of servers?</span>\r\nThe following list contains links to various server types:\r\n<ul><li>Application server;</li><li>Blade server;</li><li>Cloud server;</li><li>Database server;</li><li>Dedicated server;</li><li>Domain name service;</li><li>File server;</li><li>Mail server;</li><li>Print server;</li><li>Proxy server;</li><li>Standalone server;</li><li>Web server.</li></ul>\r\n<span style=\"font-weight: bold;\">How do other computers connect to a server?</span>\r\nWith a local network, the server connects to a router or switch that all other computers on the network use. Once connected to the network, other computers can access that server and its features. For example, with a web server, a user could connect to the server to view a website, search, and communicate with other users on the network.\r\nAn Internet server works the same way as a local network server, but on a much larger scale. The server is assigned an IP address by InterNIC, or by a web host.\r\nUsually, users connect to a server using its domain name, which is registered with a domain name registrar. When users connect to the domain name (such as "computerhope.com"), the name is automatically translated to the server's IP address by a DNS resolver.\r\nThe domain name makes it easier for users to connect to the server because the name is easier to remember than an IP address. Also, domain names enable the server operator to change the IP address of the server without disrupting the way that users access the server. The domain name can always remain the same, even if the IP address changes.\r\n<span style=\"font-weight: bold;\">Where are servers stored?</span>\r\nIn a business or corporate environment, a server and other network equipment are often stored in a closet or glasshouse. 
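The name-to-address translation performed by a DNS resolver, as described above, can be observed directly. A sketch using Python's standard resolver interface; 'localhost' is used here so the lookup needs no network access, but a registered domain name resolves the same way:

```python
import socket

# A DNS resolver (or the local hosts file) maps a human-friendly
# name to the IP address the connection actually uses.
ip = socket.gethostbyname('localhost')
print(ip)  # typically 127.0.0.1
```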
These areas help isolate sensitive computers and equipment from people who should not have access to them.\r\nServers that are remote or not hosted on-site are located in a data center. With these types of servers, the hardware is managed by another company and configured remotely by you or your company.","iconURL":"https://old.roi4cio.com/fileadmin/user_upload/icon_Server.png"}],"characteristics":[],"concurentProducts":[],"jobRoles":[],"organizationalFeatures":[],"complementaryCategories":[],"solutions":[],"materials":[],"useCases":[],"best_practices":[],"values":[],"implementations":[]}],"partnershipProgramme":null}},"aliases":{},"links":{},"meta":{},"loading":false,"error":null},"implementations":{"implementationsByAlias":{},"aliases":{},"links":{},"meta":{},"loading":false,"error":null},"agreements":{"agreementById":{},"ids":{},"links":{},"meta":{},"loading":false,"error":null},"comparison":{"loading":false,"error":false,"templatesById":{},"comparisonByTemplateId":{},"products":[],"selectedTemplateId":null},"presentation":{"type":null,"company":{},"products":[],"partners":[],"formData":{},"dataLoading":false,"dataError":false,"loading":false,"error":false},"catalogsGlobal":{"subMenuItemTitle":""}}