The International Organization for Standardization (ISO) developed the Open Systems Interconnection (OSI) Reference Model to describe how information is transferred from one machine to another.
Microsegmentation is a term used with switches to describe the arrangement in which each networking device has its own dedicated port on a switch, giving it a collision domain to itself.
EIA/TIA refers to a cooperative trade association responsible for the "Commercial Building Telecommunications Cabling Standard," also known as EIA/TIA-568, which specifies how network cables should be installed in a commercial site.
ARP, or Address Resolution Protocol, can be likened to DNS for MAC addresses. Standard DNS allows for the mapping of human-friendly names to IP addresses, while ARP allows for the mapping of IP addresses to MAC addresses. In this way it lets systems go from a regular domain name down to the actual piece of hardware that address resides upon.
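The two-step mapping can be sketched as a pair of table lookups. The names and addresses below are made up, and the real ARP cache lives in the kernel, populated by broadcast "who-has" requests rather than application code:

```python
# Toy model of the DNS -> IP -> MAC resolution chain (illustrative only;
# real ARP lookups are handled by the kernel, not application code).
DNS_TABLE = {"fileserver.example.local": "192.168.1.20"}   # hypothetical names
ARP_CACHE = {"192.168.1.20": "00:1a:2b:3c:4d:5e"}

def resolve_to_mac(hostname):
    ip = DNS_TABLE[hostname]          # DNS: name -> IP address
    mac = ARP_CACHE.get(ip)           # ARP: IP address -> MAC address
    if mac is None:
        # A real host would broadcast "who has <ip>?" and cache the reply.
        raise LookupError("ARP entry for %s not cached" % ip)
    return mac

print(resolve_to_mac("fileserver.example.local"))  # 00:1a:2b:3c:4d:5e
```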
Logon scripts are, surprisingly enough, scripts that run at logon time. They are most often used to maintain continued access to share and device mappings, as well as to force updates and configuration changes. In this way, they allow for one-step modifications if servers get changed, shares get renamed, or printers get switched out, for example.
The three basic ways to authenticate someone are: something they know (password), something they have (token), and something they are (biometrics). Two-factor authentication is a combination of two of these methods, oftentimes using a password and token setup, although in some cases this can be a PIN and thumbprint.
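The "something they have" factor is commonly a TOTP token. Below is a minimal standard-library sketch of the RFC 6238 algorithm; the secret and timestamp in the example line are the published RFC test vector:

```python
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    now = time.time() if for_time is None else for_time
    counter = int(now) // step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59s -> "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

The server and token share the secret and the clock; both compute the same code for the current 30-second window, so the code proves possession without ever sending the secret.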
The Encrypting File System (EFS), Microsoft's built-in file encryption utility, has been around for quite some time. Files that have been encrypted in this way can appear in Windows Explorer with a green tint, as opposed to the black of normal files or blue for NTFS-compressed files. Files that have been encrypted are tied to the specific user, and it can be difficult to decrypt the file without the user's assistance. On top of this, if the user loses their password it can become impossible to decrypt the files, as the decryption process is tied to the user's login and password. EFS can only be used on NTFS-formatted partitions, and while it is capable of encrypting entire drives it is most often reserved for individual files and folders. For larger purposes, BitLocker is a better alternative.
A type of signal interference caused by signals transmitted on one pair of wires bleeding over into the other pairs. Crosstalk can cause network signals to degrade, eventually rendering them unviable.
Boot to LAN is most often used when you are doing a fresh install on a system. What you would do is set up a network-based installation server capable of network-booting clients via PXE. Boot to LAN enables this by allowing a pre-boot environment to look for a DHCP server and connect to the broadcasting network installation server. Environments that have very large numbers of systems more often than not have the capability of pushing out images via the network. This reduces the amount of hands-on time required on each system, and keeps the installs more consistent.
Sticky ports are one of the network admin's best friends and worst headaches. They allow you to set up your network so that each port on a switch only permits one (or a number that you specify) computer to connect on that port by locking it to a particular MAC address. If any other computer plugs into that port, the port shuts down and you receive a call that they can't connect anymore. If you were the one that originally ran all the network connections then this isn't a big issue, and likewise if it is a predictable pattern then it also isn't an issue. However if you're working in a hand-me-down network where chaos is the norm then you might end up spending a while toning out exactly what they are connecting to.
RDP or Remote Desktop Protocol is the primary method by which Windows Systems can be remotely accessed for troubleshooting and is a software-driven method. KVM or Keyboard Video and Mouse on the other hand allows for the fast-switching between many different systems, but using the same keyboard, monitor and mouse for all. KVM is usually a hardware-driven system, with a junction box placed between the user and the systems in question- but there are some options that are enhanced by software. KVM also doesn't require an active network connection, so it can be very useful for using the same setup on multiple networks without having cross-talk.
While we're on the subject of Apple, Appletalk is a protocol developed by Apple to handle networking with little to no configuration (you may be sensing a pattern here). It reached its peak in the late 80s and early 90s, but there are still some devices that utilize this protocol. Most of its core technology has been moved over to Bonjour, while UPnP (Universal Plug and Play) has picked up on its ideology and moved the concept forward across many different hardware and software packages.
Routing is the process of finding a path along which data can be transferred from a source to a destination.
The ability to remote into servers without having to actually be there is one of the most convenient methods of troubleshooting or running normal functions on a server- Terminal Services allow this capability for admins, but also another key function for standard users: the ability to run standard applications without having to have them installed on their local computers. In this way, all user profiles and applications can be maintained from a single location without having to worry about patch management and hardware failure on multiple systems.
Being able to ping out to a server and see if it's responding is a great way to troubleshoot connectivity issues. But what if you're not able to ping ANY server? Does that mean that your entire network is down? Does it mean that your network cable needs to be replaced? Does it mean that your network card is going bad? Or could it possibly be that sunspots, magnets, aliens and the Men In Black are all conspiring against you? The answers to these questions could be very difficult, but at the very least you can rule out whether your network card is going bad. 127.0.0.1 is the loopback address on your network interface card (NIC)- pinging this address will see if it is responding. If the ping is successful, then the hardware is good. If it isn't, then you might have some maintenance in your future. 127.0.0.1 and localhost mean the same thing as far as most functions are concerned; however, be careful when using them in situations like web programming, as browsers can treat them very differently.
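The same loopback check can be done in code rather than with ping; this sketch round-trips one byte over 127.0.0.1, which only succeeds if the local TCP/IP stack is working:

```python
import socket

def loopback_ok():
    """Round-trip one byte over 127.0.0.1 to sanity-check the local TCP/IP stack."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
        server.listen(1)
        client = socket.create_connection(server.getsockname(), timeout=2)
        conn, _ = server.accept()
        conn.sendall(b"x")
        ok = client.recv(1) == b"x"
        client.close()
        conn.close()
        return ok
    except OSError:
        return False
    finally:
        server.close()

print(loopback_ok())  # True on a healthy machine
```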
HTTPS, or HTTP Secure (not to be confused with S-HTTP, which is a separate and largely abandoned protocol), is HTTP's big brother. Designed with identity verification in mind, HTTPS uses SSL/TLS certificates to verify that the server you are connecting to is the one it claims to be. HTTPS also encrypts the traffic between client and server, which is why it is now expected on virtually any site handling sensitive information rather than being reserved for logins and payments. HTTPS traffic goes over TCP port 443.
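In code, that verification is handled by the TLS library. A sketch using Python's ssl module; the hostname in the commented example is a placeholder, and actually fetching a certificate requires network access:

```python
import socket
import ssl

def server_cert_subject(host, port=443):
    """Fetch and verify a server's certificate, returning its subject fields."""
    ctx = ssl.create_default_context()   # verifies the chain and the hostname
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return dict(item for field in cert["subject"] for item in field)

# Example (requires network access; hostname is a placeholder):
# print(server_cert_subject("example.org").get("commonName"))
```

The default context refuses the connection outright if the certificate chain or hostname does not check out, which is exactly the identity guarantee described above.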
VoIP is far better than traditional telephony but it has some drawbacks as listed below:
☛ Some VoIP services don't work during power outages and the service provider may not offer backup power.
☛ Not all VoIP services connect directly to emergency services through 9-1-1.
☛ VoIP providers may or may not offer directory assistance/white page listings.
A Firewall, put simply, keeps stuff from here talking to stuff over there. Firewalls exist in many different possible configurations, with both hardware and software options as well as network and host varieties. Most of the general user base had their first introduction to firewalls when Windows XP SP2 arrived with Windows Firewall installed. This came with a lot of headaches, but to Microsoft's credit it did a lot of good things. Over the years it has improved a great deal, and while there are still many options that go above and beyond what it does, what Windows Firewall accomplishes it does very well. Enhanced server-grade versions have been released as well, with a great deal of customization available to the admin.
☛ A domain local group is a security or distribution group that can contain universal groups, global groups, other domain local groups from its own domain, and accounts from any domain in the forest. You can give domain local security groups rights and permissions on resources that reside only in the same domain where the domain local group is located.
☛ A global group is a group that can be used in its own domain, in member servers and in workstations of the domain, and in trusting domains. In all those locations, you can give a global group rights and permissions and the global group can become a member of local groups. However, a global group can contain user accounts that are only from its own domain.
☛ A universal group is a security or distribution group that contains users, groups, and computers from any domain in its forest as members. You can give universal security groups rights and permissions on resources in any domain in the forest. Universal security groups are not supported in domains running at the Windows 2000 mixed functional level.
SSH or Secure Shell is most well known by Linux users, but has a great deal that it can be used for. SSH is designed to create a secure tunnel between devices, whether that be systems, switches, thermostats, toasters, etc. SSH also has a unique ability to tunnel other programs through it, similar in concept to a VPN so even insecure programs or programs running across unsecure connections can be used in a secure state if configured correctly. SSH runs over TCP port 22.
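The tunneling described above is driven by OpenSSH's -L, -R and -D flags. A command sketch with placeholder hostnames and usernames:

```shell
# Forward local port 8080 through gateway.example.com to an internal web server.
# While the tunnel is up, http://localhost:8080 reaches intranet:80 securely.
ssh -L 8080:intranet:80 admin@gateway.example.com

# -D opens a local SOCKS proxy instead, tunneling arbitrary client traffic
# (point the application's proxy settings at localhost:1080):
ssh -D 1080 admin@gateway.example.com
```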
Error 5 is very common when dealing with files and directories that have very specific permissions. When trying to copy elements from areas that have restricted permissions, or when trying to copy files to an area that has restricted permissions, you may get this error which basically means "Access denied". Checking out permissions, making sure that you have the appropriate permissions to both the source and destination locations, and making yourself the owner of those files can help to resolve this issue. Just remember that if you are not intended to be able to view these files to return the permissions back to normal once you are finished.
At a very basic level, there really isn't one. As you progress up the chain however, you start to realize that there actually are a lot of differences in the power available to users (and admins) depending on how much you know about the different interfaces. Each of these utilities is a CLI- Command Line Interface- that allows for direct access to some of the most powerful utilities and settings in their respective operating systems. Command Prompt (cmd) is a Windows utility based very heavily on DOS commands, but has been updated over the years with different options such as long filename support. Bash (short for Bourne-Again Shell) on the other hand is the primary means of managing Unix/Linux operating systems and has a great deal more power than many of its GUI counterparts. Any Windows user that is used to cmd will recognize some of the commands due to the fact that DOS was heavily inspired by Unix and thus many commands have versions that exist in Bash. That being said, they may not be the best ones to use; for example while list contents (dir) exists in Bash, the recommended method would be to use list (ls) as it allows for much easier-to-understand formatting. Powershell, a newer Windows Utility, can be considered a hybrid of these two systems- allowing for the legacy tools of the command prompt with some of the much more powerful scripting functions of Bash.
Similar to how a DNS server caches the addresses of accessed websites, a proxy server caches the contents of those websites and handles the heavy lifting of access and retrieval for users. Proxy servers can also maintain a list of blacklisted and whitelisted websites so as to prevent users from getting easily preventable infections. Depending on the intentions of the company, proxy servers can also be used to monitor web activity by users, to make sure that sensitive information is not leaving the building. Proxy servers also exist as web proxy servers, allowing users to conceal their true access point from the websites they are visiting, to get around region blocking, or both.
SNMP is the "Simple Network Management Protocol". Most systems and devices on a network are able to tell when they are having issues and present them to the user through either prompts or displays directly on the device. For administrators, unfortunately, it can be difficult to tell when there is a problem unless the user calls them over. On devices that have SNMP enabled, however, this information can be broadcast and picked up by programs that know what to look for. In this way, reports can be run based on the current status of the network: which patches are currently not installed, whether a printer is jammed, and so on. In large networks this is a requirement, but in a network of any size it can serve as a resource to see how the network is faring and give a baseline of its current health.
The simple answer is that Multimode is cheaper but can't transmit as far. Single Mode has a smaller core (the part that handles light) than Multimode, but is better at keeping the light intact. This allows it to travel greater distances and at higher bandwidths than Multimode. The problem is that the requirements for Single Mode are very specific, and as a result it is usually more expensive than Multimode. Therefore, in practice you will usually see Multimode in the datacenter and Single Mode for long-haul connections.
A workgroup is a collection of systems each with their own rules and local user logins tied to that particular system. A Domain is a collection of systems with a centralized authentication server that tells them what the rules are. While workgroups work effectively in small numbers, once you pass a relatively low threshold (usually anything more than say 5 systems), it becomes increasingly difficult to manage permissions and sharing effectively. To put this another way, a workgroup is very similar to a P2P network- each member is its own island and chooses what it decides to share with the rest of the network. Domains on the other hand are much more like a standard client/server relationship- the individual members of the domain connect to a central server which handles the heavy lifting and standardization of sharing and access permissions.
ICMP is the Internet Control Message Protocol. Most users will recognize the name through tools such as ping and traceroute, which run over this protocol among other things. Its primary purpose is to carry control and error messages, for example telling a system whether a remote end is reachable. Like TCP and UDP, it is part of the IP suite, but rather than riding on top of TCP or UDP it is identified by IP protocol number 1 (for reference, TCP is IP protocol 6 and UDP is IP protocol 17). Note that this is a different numbering scheme from TCP and UDP port numbers: ICMP messages do not use ports at all. The legacy Echo service that listens on TCP and UDP port 7 is a separate protocol and is unrelated to ICMP echo requests.
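To make the "no ports" point concrete, an ICMP echo request is just a small binary header. This sketch builds one and verifies its RFC 1071 checksum; actually sending it would require a raw socket and elevated privileges, so here we only construct the packet:

```python
import struct

def inet_checksum(data):
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def echo_request(ident, seq, payload=b"ping"):
    """Build an ICMP echo request (type 8, code 0) with a valid checksum."""
    header = struct.pack(">BBHHH", 8, 0, 0, ident, seq)   # checksum field = 0 first
    csum = inet_checksum(header + payload)
    return struct.pack(">BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(ident=1, seq=1)
# A packet with a correct embedded checksum re-checksums to zero:
print(inet_checksum(pkt))  # 0
```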
☛ RJ-11 (Registered Jack-11) is a four- or six-wire connector primarily used to connect telephone equipment.
☛ RJ-45 (Registered Jack-45) is an eight-wire connector that is commonly used to connect computers to a local area network (LAN), particularly Ethernet LANs.
☛ AUI (Attachment Unit Interface) is the part of the Ethernet standard that specifies how a Thicknet cable is to be connected to an Ethernet card. AUI specifies a coaxial cable connected to a transceiver that plugs into a 15-pin socket on the network interface card (NIC).
☛ BNC stands for Bayonet Neill-Concelman (often mis-expanded as British Naval Connector or Bayonet Nut Connector), a type of connector used with coaxial cables such as RG-58. BNC connectors are most associated with Thinnet (10BASE2) cabling.
In VoIP, phone conversations are converted to packets that flit all over the Internet or private networks, just like e-mails or Web pages, though voice packets get priority status. The packets get reassembled and converted to sound on the other end of the call. In traditional phone service, by contrast, a phone conversation is converted into electronic signals that traverse an elaborate network of switches in a dedicated circuit that lasts the duration of the call.
The main purpose of the data link layer is to check whether messages are sent to the right devices. Another function of the data link layer is framing.
☛ Use a minimum password length of 12 to 14 characters if permitted.
☛ Include lowercase and uppercase alphabetic characters, numbers and symbols if permitted.
☛ Generate passwords randomly where feasible.
☛ Avoid using the same password twice (e.g. across multiple user accounts and/or software systems).
☛ Avoid character repetition, keyboard patterns, dictionary words, letter or number sequences, usernames, relative or pet names, romantic links (current or past) and biographical information (e.g. ID numbers, ancestors' names or dates).
☛ Avoid using information that is or might become publicly associated with the user or the account.
☛ Avoid using information that the user's colleagues and/or acquaintances might know to be associated with the user.
☛ Do not use passwords which consist wholly of any simple combination of the aforementioned weak components.
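Several of these guidelines can be enforced in code. Below is a sketch using Python's secrets module; the 14-character length and the required character classes are illustrative choices:

```python
import secrets
import string

def generate_password(length=14):
    """Random password guaranteed to include lower, upper, digit and symbol."""
    if length < 4:
        raise ValueError("length must allow one of each character class")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        # secrets (not random) draws from the OS CSPRNG, suitable for secrets.
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())  # e.g. 'k2#Vr9!mQx@Lp7' (random each run)
```

Because each password is generated randomly and independently, it also satisfies the "never reuse" and "no dictionary words" rules by construction.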
Virtual Machines have only recently come into mainstream use, however they have been around under many different names for a long time. With the massive growth of hardware outstripping software requirements, it is now possible to have a server lying dormant 90% of the time while having other older systems at max capacity. Virtualizing those systems would allow the older operating systems to be copied completely and running alongside the server operating system- allowing the use of the newer more reliable hardware without losing any information on the legacy systems. On top of this, it allows for much easier backup solutions as everything is on a single server.
If you ask a Linux admin "What is root?", you may very well get the response "root, god, what's the difference?" Essentially root is THE admin, but in a Linux environment it is important to remember that unlike in a Windows environment, you spend very little time in a "privileged" mode. Many Windows programs over the years have required that the user be a local admin in order to function properly and have caused huge security issues as a result. This has changed some over the years, but it can still be difficult to remove all of the programs asking for top-level permissions. A Linux user remains a standard user nearly all the time, and only when necessary do they change their permissions to that of root or the superuser (su). sudo (literally, "superuser do") is the main way to run one-off commands as root, although it is also possible to temporarily open a root-level bash prompt. UAC (User Account Control) is similar in theme to sudo, and like Windows Firewall can be a pain in the neck, but it does do a lot of good. Both programs allow the user to engage higher-level permissions without having to log out of their current user session- a massive time saver.
Services are programs that run in the background based on a particular system status such as startup. Services exist across nearly all modern operating systems, although vary in their naming conventions depending on the OS- for example, services are referred to as daemons in Unix/Linux-type operating systems. Services also have the ability to set up actions to be done if the program stops or is closed down. In this way, they can be configured to remain running at all times.
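On systemd-based Linux distributions, that "keep it running" behavior is declared in the daemon's unit file. A minimal hypothetical example (the service name and binary path are placeholders):

```ini
# /etc/systemd/system/example-agent.service (hypothetical service)
[Unit]
Description=Example background agent
After=network.target

[Service]
ExecStart=/usr/local/bin/example-agent
# Relaunch the daemon whenever it exits, after a 5-second delay
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```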
☛ What is the maximum Tx configuration for GSM? How it can be managed?
☛ What is the functionality of search window in CDMA? What is the difference between Ec/Io and Eb/No?
☛ What is Blackberry?
☛ Why is Rx power in microwave links not considered below -30 dBm?
☛ What is FCC and how does it relate to Bluetooth?
☛ How does Bluetooth use frequency hopping for security?
☛ What is a Bluetooth dongle?
☛ Which Bluetooth version uses adaptive frequency hopping?
☛ Which company originally conceived and developed Bluetooth?
☛ What is the total number of masters and slaves in a piconet?
☛ What is the frequency range used for Bluetooth in Europe and United States?
☛ Why is Walsh code used?
☛ Which technology is used in Bluetooth for avoiding interference?
☛ What is the difference between Internet and ISDN?
☛ What is the frequency range used for Bluetooth in Japan?
☛ How many SCO links are there in a piconet?
☛ What is FEC in Bluetooth?
☛ What is the main difference between GSM and CDMA? Which digital modulation is used today in telecommunication? (Whatever the question meant, the expected answer was CDMA.)
☛ How do you link a T1 from the 1st BTS to 2nd BTS 4?
☛ Why can Bluetooth equipment integrate easily in TCP or IP network?
☛ What is the difference between Internet and ISDN? Are the two the same, or is there a specific difference?
☛ What is the difference between CDU C and CDU A?
☛ What is the difference between Diplexer and Duplexer and what position?
☛ Explain LTE and GSM internetworking.
☛ What is ISUP protocol?
☛ Why CPG message is required in ISUP protocol?
☛ If RF power is good then what is the best Rx and Tx power?
☛ What do you mean by TSCM?
☛ Which frequency is used in voice sampling?
☛ What is SS-7 signaling system?
☛ Where memory is allocated for variables in a program?
☛ What are various statuses of kernel?
☛ What is the maximum decimal place which can be accommodated in a byte?
☛ How can a personal computer act as a terminal?
☛ How is a connection established in a datagram network?
☛ What is the time for 1 satellite hop in voice communication?
☛ What is the maximum number of satellite hops allowed in voice communication?
☛ How many channels a 2MB PCM (pulse code modulation) has?
☛ What action is taken when the processor under execution is interrupted by a non-maskable interrupt?
☛ How much voltage is required in subscriber loop connected to local exchange?
☛ How many T1 facilities the company needs between its office and the PSTN if it has 47 digital telephones, each operating at 64kbps?
☛ What is the type of signaling used between two exchanges?
☛ Where are the conditional results stored after execution of an instruction in a microprocessor?
☛ What is line of sight?
☛ Why can I get the 512k service but not the 1Mb or 8Mb Broadband service?
☛ What is Buffering?
☛ What is a matrix?
☛ What equipment do I need in order to be able to access Broadband?
☛ What is a Broadband modem?
☛ How can I connect several computers to the Internet with Global Telecom Broadband?
☛ What is Broadband?
☛ What is the procedure if I want to upgrade my Broadband account to a faster speed?
☛ Who can I contact if I continue having problems with my Broadband service?
☛ What are the terms and conditions of using Global Telecom Broadband?
☛ Explain how the signal is amplified in fiber optic cable?
☛ What is BTS?
☛ What are its different configurations of BTS and what is the power consumption/peak current for each of these types of BTS?
☛ Write very briefly the underlining functional concept of GSM and CDMA?
☛ What is Bridging?
☛ Difference between Router and Switch.
☛ What are the different Types of polling in RLC A.M mode?
☛ What information is passed between cell FACH and cell DCH states?
☛ Why is air interface signaling the main function of the BTS?
☛ What is TTCN-3?
☛ What is the difference between Rx Lev Sub and Rx Lev Full? What you mean by Link Budget?
☛ Explain different types of digital modulation techniques.
For the IP addressing scheme most people are familiar with (IPv4), an address consists of four sets (octets) of numbers, each with a value of up to 255. You have likely run into this when troubleshooting a router or a DHCP server handing out addresses in a particular range- usually 192.168.x.x or 10.x.x.x in the case of a home or small commercial network. IP classes are primarily differentiated by the number of potential hosts they can support on a single network: the more networks a given class supports, the fewer addresses are available for each network. Class A networks run from 1.x.x.x up to 126.x.x.x (the entire 127.x.x.x block is reserved for loopback or localhost connections). These networks are usually reserved for the very largest of customers or some of the original members of the Internet; xkcd once published an excellent (albeit now dated) map showing who officially owns what. Class B (128.x to 191.x) and Class C (192.x to 223.x) networks are much fuzzier at the top level about who officially owns them. Within these classes, certain ranges (10.x.x.x, 172.16.x.x through 172.31.x.x, and 192.168.x.x) are reserved for private in-house networks, which, as mentioned above, is why so many different manufacturers use 192.168.x.x as their default setting. Class D and E are reserved for special uses (multicast and experimental, respectively) and are normally not required knowledge.
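The historical classful rules come down to the first octet, which makes for a quick sketch:

```python
def ipv4_class(address):
    """Classify an IPv4 address by its historical class (A-E)."""
    first = int(address.split(".")[0])
    if first == 127:
        return "loopback"            # the whole 127.0.0.0/8 block is reserved
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"                   # multicast
    if 240 <= first <= 255:
        return "E"                   # experimental
    raise ValueError("unexpected first octet: %d" % first)

print(ipv4_class("10.0.0.1"), ipv4_class("192.168.1.1"), ipv4_class("127.0.0.1"))
# A C loopback
```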
The transport layer assigns a unique set of numbers to each connection. These numbers are called port or socket numbers. TCP and UDP provide a multiplexing function for a device: this allows multiple applications to simultaneously send and receive data.
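A small demonstration of that multiplexing: two sockets on the same host are told apart purely by their port numbers.

```python
import socket

# Two services on one host are distinguished purely by port number.
web = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mail = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
web.bind(("127.0.0.1", 0))    # port 0: the OS assigns a free ephemeral port
mail.bind(("127.0.0.1", 0))
web_port = web.getsockname()[1]
mail_port = mail.getsockname()[1]

# Same IP address, different ports: the transport layer demultiplexes by port.
print(web_port != mail_port)  # True
web.close()
mail.close()
```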
A straight-through cable is used for DTE-to-DCE connections.
☛ A hub to a router, PC, or file server
☛ A switch to a router, PC, or file server
Crossover cables should be used when you connect a DTE to another DTE or a DCE to another DCE.
★ A hub to another hub
★ A switch to another switch
★ A hub to a switch
★ A PC, router, or file server to another PC, router, or file server
Subnets are used in IP networks to break a larger network up into smaller networks. Subnetting optimizes network performance because it reduces traffic by keeping it within each smaller network. It also makes it easier to identify and isolate network problems.
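Python's standard ipaddress module makes the arithmetic concrete; here a /24 is broken into four /26 subnets:

```python
import ipaddress

# Split one /24 into four /26 subnets ("breaking up a larger network").
network = ipaddress.ip_network("192.168.0.0/24")
subnets = list(network.subnets(new_prefix=26))

for sub in subnets:
    # Each /26 holds 64 addresses, minus the network and broadcast addresses.
    print(sub, "usable hosts:", sub.num_addresses - 2)
# 192.168.0.0/26 usable hosts: 62
# 192.168.0.64/26 usable hosts: 62
# 192.168.0.128/26 usable hosts: 62
# 192.168.0.192/26 usable hosts: 62
```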
When you're working in Active Directory, you see a tree-type structure going down through various organizational units (OU's). The easiest way to explain this is to run through a hypothetical example.
Say that we had a location reporting for CNN that dealt with nothing but the Detroit Lions. So we would set up a location with a single domain, and computers for each of our users. This would mean starting at the bottom: OU's containing the users, groups and computers are at the lowest level of this structure. A Domain is a collection of these OU's as well as the policies and other rules governing them. So we could call this domain 'CNNDetroitLions'. A single domain can cover a wide area and include multiple physical sites, but sometimes you need to go bigger.
A tree is a collection of domains bundled together by a common domain trunk, rules, and structure. If CNN decided to combine all of its football team sites together in a common group, so that its football sports reporters could go from one location to the next without a lot of problems, then that would be a tree. So then our domain could be joined up into a tree called 'football', and then the domain would be 'CNNDetroitLions.football' while another site could be called 'CNNChicagoBears.football'.
Sometimes you need to go bigger still, where a collection of trees is bundled together into a Forest. Say CNN saw that this was working great and wanted to bring together all of its reporters under a single unit, so that any reporter could log in to any CNN-controlled site; call this Forest 'cnn.com'. Our domain would then become 'CNNDetroitLions.football.cnn.com', while another member of this same Forest could be called 'CNNNewYorkYankees.baseball.cnn.com', and yet another could be 'CNNLasVegas.poker.cnn.com'. Typically, the larger an organization, the more complicated it becomes to administer, and when you get to something as large as this it becomes exponentially more difficult to police.
When trying to communicate with systems on the inside of a secured network, it can be very difficult to do so from the outside- and with good reason. Therefore, the use of a port forwarding table within the router itself or other connection management device, can allow for specific traffic to be automatically forwarded on to a particular destination. For example, if you had a web server running on your network and you wanted access to be granted to it from the outside, you would setup port forwarding to port 80 on the server in question. This would mean that anyone putting in your IP address in a web browser would be connected up to the server's website immediately. Please note, this is usually not recommended to allow access to a server from the outside directly into your network.
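At the socket level, a port forward is just a listener that relays bytes to an inside host and port. A minimal single-connection sketch; a real router does this in its NAT table rather than in user code:

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until src closes its side."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def forward_once(target_host, target_port):
    """Listen on an ephemeral local port and relay one client to the target.

    Returns the local port; a router's port-forwarding rule plays the same
    role, mapping an outside port onto an inside host:port.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    def run():
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        # Relay in both directions until either side closes.
        threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()
        _pipe(client, upstream)

    threading.Thread(target=run, daemon=True).start()
    return port
```

Anything connecting to the returned port is transparently carried through to the target, just as outside traffic to port 80 would be carried to the web server in the example above.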
ipconfig is one of the primary network connection troubleshooting and information tools available for Windows operating systems. It allows the user to see the current connection information, force a release of settings assigned by DHCP, force a new request for a DHCP lease, and clear out the local DNS cache, among other functions. ifconfig is a similar utility for Unix/Linux systems that at first glance seems identical, but actually isn't. While it does allow for very quick (and thorough) access to network connection information, it does not provide the DHCP functions that ipconfig does. Those functions are in fact handled by a separate client service/daemon such as dhclient or dhcpcd (dhcpd, despite the similar name, is the DHCP server daemon).
In plain English, DNS is the Internet's phone book. The Domain Name System is what makes it possible to only have to remember something like "cnn.com" instead of (at this particular moment) "184.108.40.206". IP addresses change all the time, however, although less so for mega-level servers. Human-friendly names give users something much easier to remember and less likely to change frequently, and DNS makes it possible to map those names to the new addresses under the hood. If you were to look in a standard phone book and you knew the name of the person or business you were looking for, it would show you the number for that person. DNS servers do exactly the same thing, but with updates on a daily or hourly basis. The tiered nature of DNS also makes it possible to have repeat queries responded to very quickly, although it may take a few moments to discover where a brand new address is that you haven't been to before. From your home, say that you wanted to go to the InfoSec Institute's home page. You know the address for it, so you punch it in and wait. Your computer will first talk to your local DNS server (likely your home router) to see if it knows where it is. If it doesn't know, it will talk to your ISP's DNS server and ask it if it knows. If the ISP doesn't know, it will keep going up the chain asking questions until it reaches one of the 13 Root DNS Servers. The responding DNS server will send the appropriate address back down the pipe, caching it in each location as it does so to make any repeat requests much faster.
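The escalate-and-cache behavior can be modeled in a few lines; the resolver names and the address below are made up for illustration:

```python
# Toy model of tiered DNS resolution with caching (addresses are made up).
class Resolver:
    def __init__(self, name, records=None, upstream=None):
        self.name = name
        self.cache = dict(records or {})
        self.upstream = upstream          # next resolver up the chain

    def lookup(self, hostname):
        if hostname in self.cache:        # answered locally: fast path
            return self.cache[hostname]
        if self.upstream is None:
            raise LookupError("%s: cannot resolve %s" % (self.name, hostname))
        answer = self.upstream.lookup(hostname)
        self.cache[hostname] = answer     # cache on the way back down
        return answer

root = Resolver("root", {"example.org": "203.0.113.10"})
isp = Resolver("isp", upstream=root)
home_router = Resolver("router", upstream=isp)

print(home_router.lookup("example.org"))   # 203.0.113.10 (walked the chain)
print("example.org" in isp.cache)          # True: cached for repeat queries
```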
Dynamic Host Configuration Protocol is the default way for connecting up to a network. The implementation varies across Operating Systems, but the simple explanation is that there is a server on the network that hands out IP addresses when requested. Upon connecting to a network, a DHCP request will be sent out from a new member system. The DHCP server will respond and issue an address lease for a varying amount of time. If the system connects to another network, it will be issued a new address by that server but if it re-connects to the original network before the lease is up- it will be re-issued that same address that it had before.
To illustrate this point, say you have your phone set to Wi-Fi at home. It will pick up a DHCP address from your router before you head to work and connect to your corporate network. There it will be issued a new address by your DHCP server, before you go to Starbucks for your mid-morning coffee, where you'll get another address, then at the local restaurant where you get lunch, then at the grocery store, and so on and so on.
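A toy model of that lease behavior, with made-up address pools and a made-up MAC address:

```python
class DhcpServer:
    """Toy DHCP server: leases addresses from a pool, remembers its clients."""
    def __init__(self, pool):
        self.free = list(pool)
        self.leases = {}                  # MAC address -> leased IP

    def request(self, mac):
        if mac in self.leases:            # renewing before the lease expires:
            return self.leases[mac]       # the same address is re-issued
        ip = self.free.pop(0)             # otherwise hand out the next free one
        self.leases[mac] = ip
        return ip

home = DhcpServer(["192.168.1.%d" % n for n in range(100, 200)])
work = DhcpServer(["10.0.5.%d" % n for n in range(50, 150)])

phone = "aa:bb:cc:dd:ee:ff"
print(home.request(phone))   # 192.168.1.100
print(work.request(phone))   # 10.0.5.50  (new network, new address)
print(home.request(phone))   # 192.168.1.100 (back home: same lease re-issued)
```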
UTP cable comes in a variety of different grades, called "categories" by the Electronics Industry Association (EIA) and the Telecommunications Industry Association (TIA), the combination being referred to as EIA/TIA.
☛ Cat 1: Used for voice-grade telephone networks only; not for data transmissions
☛ Cat 2: Used for voice-grade telephone networks
☛ Cat 3: Used for voice-grade telephone networks, 10 Mbps Ethernet, and 4 Mbps Token Ring
☛ Cat 4: Used for 16 Mbps Token Ring networks
☛ Cat 5: Used for 100BaseTX Fast Ethernet, SONET, and OC-3 ATM
☛ Cat 5e: Used for Gigabit (1000 Mbps) Ethernet protocols
☛ Switches operate at the data link layer.
☛ Switches create a separate collision domain per port and a single broadcast domain.
☛ Address learning.
☛ Forward/filter decisions using MAC addresses.
☛ Hubs operate at the physical layer.
☛ Hubs create a single collision domain and a single broadcast domain.
☛ No addressing.
☛ No filtering.
Unix/Linux permissions operate on a much simpler methodology than Windows does, but as a result, when you're trying to figure out how they work it can feel like you've been hit by a slice of lemon wrapped around a large gold brick: it should be simple, but the way you're used to is incompatible with what you are trying to do, so it makes your brain hurt. Linux permissions are normally visible using the following scale: d | rwx | rwx | rwx. This stretch of characters actually represents four distinct sections of binary switches: directory, owner, group, other. The first value (d) asks 'is this a directory?', while the next group (rwx) represents what permissions the owner of the file has: read (r), write (w), and execute (x). The next set of values (rwx) represents what members of the group can do for those same permissions. The final set (rwx) says what everybody else can do. Fairly straightforward, but where do the 755 and 644 values come into play? These are the real-world shorthand for the permission scale listed above. For example, reading permissions with the value of drwxr-xr-x would mean that it is a directory, the owner has full permissions, and everybody else can read and execute but not write. So if we were to look at this as a basic yes/no (1/0) system, we would see something like this:
rwx rwx rwx
111 101 101
So now we have binary values for each of these fields: 1 for yes, 0 for no. Now what do we do with them? We can calculate the shorthand values from what we see here, using binary.
0000 = 0
0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
rwx rwx rwx
111 101 101
7 5 5
This would give us 755 as shorthand for owner read, write and execute, with everybody else read and execute. Let's try this again with the 644 values by working out the following string: rw-r--r--:
rwx rwx rwx
110 100 100
6 4 4
This would give us 644 as shorthand for owner read and write, with everybody else read-only.
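The arithmetic above can be automated by treating each rwx triad as three binary digits and reading them off as one octal digit per triad. The function name here is our own; this is just a sketch of the conversion worked through by hand above.

```python
# Convert a 9-character permission string like 'rwxr-xr-x' into its octal
# shorthand: r=4, w=2, x=1, summed per owner/group/other triad.
def mode_to_octal(perms):
    digits = []
    for i in range(0, 9, 3):                 # owner, group, other triads
        triad = perms[i:i + 3]
        value = sum(bit for bit, flag in zip((4, 2, 1), triad) if flag != "-")
        digits.append(str(value))
    return "".join(digits)

print(mode_to_octal("rwxr-xr-x"))  # → 755
print(mode_to_octal("rw-r--r--"))  # → 644
```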
External media has been used for backups for a very long time, but it has started to fall out of favor in the past few years due to its speed limitations. As capacities climb higher and higher, the amount of time it takes not only to perform a backup but also a restore skyrockets. Tapes have been hit particularly hard in this regard, primarily because they were quite sluggish even before the jump to the terabyte era. Removable hard disks have picked up on this trend, however, as capacity and price have given them a solid lead over other options. But this takes us back to the question: why use EXTERNAL media? Internal media usually connects faster and is more reliable, correct? Yes and no. While the estimated lifetime of storage devices has been steadily going up, there is always the chance of user error, data corruption, or hiccups on the hard disk. As a result, regular backups to external media remain one of the best bang-for-buck methods available. Removable hard disks can now connect very rapidly, even without a dedicated hot-swap drive bay. Through eSATA or USB 3, these connections are nearly as fast as if the disk were plugged directly into the motherboard.
If you were to ask a Microsoft sales rep this question, they would no doubt list hundreds of tweaks and performance boosts from system to system. In reality, however, there are two main differences between the Windows Home edition and Windows Professional: joining a domain and built-in encryption. Both features are active in Professional only, as joining a domain is nearly a mandatory requirement for businesses. EFS (Encrypted File System) and BitLocker are likewise present only in Pro. While there are workarounds for both of these items, they do present a nice quality-of-life boost as well as allow easier standardization across multiple systems. That being said, the jump from Windows Pro to Windows Server is a monumental paradigm shift. While we could go through all of the bells and whistles of what makes Windows Server…Windows Server, it can be summed up very briefly as this: Windows Home and Pro are designed to connect outwards by default and are optimized as such. Windows Server is designed to have other objects connect to it, and as a result it is heavily optimized for this purpose. Windows Server 2012 has taken this to a new extreme by offering an installation style very similar to that of a Unix/Linux system, with no GUI whatsoever. As a result, Microsoft claims that the attack surface of the operating system has been reduced massively (when installing it in that mode).
Even if you don't recognize anything else on this list, you likely have heard of TCP/IP before. Contrary to popular belief, TCP/IP is not actually a single protocol; rather, TCP is a member of the IP protocol suite. TCP stands for Transmission Control Protocol and is one of the most mindbogglingly massively used protocols in use today. Almost every major protocol that we use on a daily basis- HTTP, FTP and SSH among a large list of others- utilizes TCP. The big benefit of TCP is that it has to establish the connection on both ends before any data begins to flow. It is also able to sync up this data flow so that if packets arrive out of order, the receiving system is able to figure out what the puzzle of packets is supposed to look like- that this packet goes before this one, this one goes here, this one doesn't belong at all and looks sort of like a fish, etc. Because the list of ports for TCP is so massive, charts showing what uses what are commonplace, and Wikipedia's list of TCP and UDP port numbers is excellent for a desk reference.
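TCP's connection-first behavior can be sketched with a loopback echo pair (the host, message, and helper name here are arbitrary examples): the client must complete a connection, with the three-way handshake happening inside `connect()`, before any application data flows.

```python
import socket
import threading

# Minimal TCP echo sketch over loopback: connect, send, receive in order.
def echo_server(listener):
    conn, _ = listener.accept()          # handshake completes here
    with conn:
        conn.sendall(conn.recv(1024))    # echo the data straight back

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # blocks until the handshake is done
client.sendall(b"hello over TCP")
reply = client.recv(1024)                # data arrives intact and in order
print(reply)
client.close()
```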
FTP, or File Transfer Protocol, is one of the big legacy protocols that probably should be retired. FTP is primarily designed for large file transfers, with the capability of resuming downloads if they are interrupted. Access to an FTP server can be accomplished using two different techniques: anonymous access and standard login. Both of these are basically the same, except anonymous access does not require an active user login while a standard login does. Here's where the big problem with FTP lies, however: the credentials of the user are transmitted in cleartext, which means that anybody listening on the wire could sniff them extremely easily. Two competing implementations that take care of this issue are SFTP (the SSH File Transfer Protocol) and FTPS (FTP over SSL/TLS). FTP uses TCP ports 20 and 21.
To make a baseband network practical for many computers to share, the data transmitted by each system is broken up into separate units called packets. When your computer transmits data, it might be broken up into many packets, and the computer transmits each packet separately. When all of the packets constituting a particular transmission reach their destination, the receiving computer reassembles them back into the original data. This is the basis for a packet-switching network.
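The split-and-reassemble idea can be sketched as follows (function names and the sample message are illustrative): each packet carries a sequence number, so the receiver can restore the original data even if packets arrive out of order.

```python
import random

# Sketch of packet switching: number the chunks, deliver them out of
# order, and reassemble by sequence number at the receiving end.
def packetize(data, size):
    return [(seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    return b"".join(chunk for _, chunk in sorted(packets))  # order by seq

message = b"Data on a baseband network travels in packets."
packets = packetize(message, 8)
random.shuffle(packets)                  # packets may arrive out of order
print(reassemble(packets) == message)    # the receiver restores the original
```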
Circuit-switching means that the two systems wanting to communicate establish a circuit before they transmit any information. That circuit remains open throughout the life of the exchange, and is only broken when the two systems are finished communicating. Circuit switching is more common in environments like the public switched telephone network (PSTN), in which the connection between your telephone and that of the person you're calling remains open for the entire duration of the call.
Communication is a two-way process of sending and receiving data between connected devices, whereas transmission is the one-way process of sending data from a source to a destination.
For general VoIP set up we require the following things:
☛ Broadband connection
☛ VoIP phone
☛ A softswitch (Nexton's, for example)
☛ An Asterisk server (or similar IP PBX)
As you can see from the demonstration up above, if you try to work out permissions for every single person in your organization individually, you can give yourself a migraine pretty quickly. Therefore, simplifying permissions while keeping them strong is critical to administering a large network. Groups allow users to be pooled by their need to know and need to access particular information. In this way, the administrator can set the permissions once- for the group- then add users to that group. When modifications to permissions need to be made, it's one change that affects all members of that group.
A print server can refer to two different options- an actual server that shares out many different printers from a central administration point, or a small dedicated box that allows a legacy printer to connect to a network jack. A network attached printer on the other hand has a network card built into it, and thus has no need for the latter option. It can still benefit from the former however, as network attached printers are extremely useful in a corporate environment since they do not require the printer to be connected directly to a single user's system.
Giving a user as few privileges as possible tends to cause some aggravation by the user, but by the same token it also removes a lot of easily preventable infection vectors. Still, sometimes users need to have local admin rights in order to troubleshoot issues- especially if they're on the road with a laptop. Therefore, creating a local admin account may sometimes be the most effective way to keep these privileges separate.
Tracert or traceroute, depending on the operating system, allows you to see exactly what routers you touch as you move along the chain of connections to your final destination. If you end up with a problem where you can't connect or can't ping your final destination, a tracert can help, as you can tell exactly where the chain of connections stops. With this information, you can contact the correct people- whether it be your own firewall, your ISP, your destination's ISP or somewhere in the middle. Tracert, like ping, uses the ICMP protocol, but traceroute implementations can also use the first step of the TCP three-way handshake, sending out SYN packets to elicit a response.
If you did any multiplayer PC gaming in the 90s and early 2000s, you likely knew of the IPX protocol as 'the one that actually works'. IPX, or Internetwork Packet Exchange, was an extremely lightweight protocol, which, given the limits of the computers of the age, was a very good thing. A competitor to TCP/IP, it functioned very well in small networks, didn't require elements like DHCP, and needed little to no configuration, but it does not scale well for applications like the Internet. As a result, it fell by the wayside and is no longer a required protocol for most elements.
HTTP, or HyperText Transfer Protocol, is the main protocol responsible for shiny content on the Web. Most webpages still use this protocol to transmit their basic website content, and it allows for the display and navigation of 'hypertext', or links. While HTTP can use a number of different carrier protocols to go from system to system, the primary protocol and port used is TCP port 80.
A protocol is a set of standards that defines operations within a network. Various protocols operate at various levels of the OSI network model; transport-layer protocols, for example, include TCP.
In half-duplex communication, data travels in only one direction at a time.
In full-duplex mode, two systems can communicate in both directions simultaneously.
RIP depends on hop count to determine the best route to a network, while IGRP considers many factors before deciding the best route to take: bandwidth, reliability, MTU, and hop count.
The process of routing is done by the devices known as Routers. Routers are the network layer devices.
Voice over Internet Protocol (VoIP) is the technology to send your voice (analog data) over the internet (digital data) to an end user. It enables users to use the Internet as the transmission medium for voice calls at a very low cost.
/etc/passwd is the primary file in Unix/Linux operating systems that stores information about user accounts, and it can be read by all users. Due to security concerns and stronger hashing capabilities, the password hashes themselves are usually stored in /etc/shadow instead, which more often than not is readable only by privileged users.
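Each /etc/passwd line holds seven colon-separated fields. A sketch parser, run here against a sample line rather than the live file (the username and paths are made up):

```python
# The seven fields of an /etc/passwd entry, in order.
FIELDS = ("username", "password", "uid", "gid", "gecos", "home", "shell")

def parse_passwd_line(line):
    """Split one passwd line into a dict keyed by field name."""
    return dict(zip(FIELDS, line.strip().split(":")))

entry = parse_passwd_line("alice:x:1000:1000:Alice:/home/alice:/bin/bash")
print(entry["username"], entry["shell"])  # → alice /bin/bash
print(entry["password"])                  # 'x' means the hash lives in /etc/shadow
```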
Shadow copies are a versioning system in place on Windows operating systems. This allows for users to go back to a previously available version of a file without the need for restoring the file from a standard backup- although the specific features of shadow copies vary from version to version of the OS. While it is not necessary to use a backup function in conjunction with Shadow Copies, it is recommended due to the additional stability and reliability it provides. Please note- Shadow Copies are not Delta Files. Delta files allow for easy comparison between versions of files, while Shadow Copies store entire previous versions of the files.
Also known as the program that can give your admin nightmares, telnet is a very small and versatile utility that allows for connections on nearly any port. Telnet lets an admin connect to remote devices and administer them via a command prompt. In many cases this has been replaced by SSH, as telnet transmits its data in cleartext (like FTP). Telnet can and does still get used, however, in cases where the user is trying to see if a program is listening on a particular port but wants to keep a low profile, or if the connection type pre-dates standard network connectivity methods.
An IDS is an Intrusion Detection System, with two basic variations: Host Intrusion Detection Systems and Network Intrusion Detection Systems. An HIDS runs as a background utility, much the same way as an anti-virus program, while a Network Intrusion Detection System sniffs packets as they go across the network, looking for things that aren't quite ordinary. Both systems have two basic variants: signature based and anomaly based. Signature based is very much like an anti-virus system, looking for known values of known 'bad things', while anomaly based looks for network traffic that doesn't fit the usual pattern of the network. The latter requires a bit more time to establish a good baseline, but in the long term can be better on the uptake for custom attacks.
A subnet mask tells the network how big it is. When an address is inside the mask, it will be handled internally as a part of the local network. When it is outside, it will be handled differently as it is not part of the local network. The proper use and calculation of a subnet mask can be a great benefit when designing a network as well as for gauging future growth.
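Python's standard `ipaddress` module can do the subnet arithmetic described above. The addresses here are illustrative RFC 1918 private values, not anything from a real deployment.

```python
import ipaddress

# A /24 network: mask 255.255.255.0, 256 addresses.
network = ipaddress.ip_network("192.168.1.0/24")

inside = ipaddress.ip_address("192.168.1.42")
outside = ipaddress.ip_address("10.0.0.5")

print(network.netmask)        # the subnet mask itself
print(inside in network)      # True: handled internally as local traffic
print(outside in network)     # False: sent outside the local network
print(network.num_addresses)  # total size, useful for gauging future growth
```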
You may never have heard of this program, but if you have ever dealt with Apple devices you've seen its effects. Bonjour is one of the programs that comes bundled with nearly every piece of Apple software (most notably iTunes) and handles a lot of its automatic discovery techniques. Best described as a hybrid of IPX and DNS, Bonjour discovers broadcasting objects on the network by using mDNS (multicast DNS) with little to no configuration required. Many admins will deliberately disable this service in a corporate environment due to potential security issues; in a home environment, however, it can be left up to the user to decide if the risk is worth the convenience.
The twin to TCP is UDP- User Datagram Protocol. Where TCP has a lot of additional under-the-hood features to make sure that everybody stays on the same page, UDP can broadcast 'into the dark'- not really caring if somebody on the other end is listening (and thus is often called a 'connectionless' protocol). As a result, the extra heavy lifting that TCP needs to do in order to create and maintain its connection isn't required so UDP oftentimes has a faster transmission speed than TCP. An easy way to picture the differences between these two protocols is like this: TCP is like a CB radio, the person transmitting is always waiting for confirmation from the person on the other end that they received the message. UDP on the other hand is like a standard television broadcast signal. The transmitter doesn't know or care about the person on the other end, all it does care about is that its signal is going out correctly. UDP is used primarily for 'small' bursts of information such as DNS requests where speed matters above nearly everything else. The above listing for TCP also contains counterparts for UDP, so it can be used as a reference for both.
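UDP's connectionless nature can be sketched over loopback (host and message are arbitrary examples): the sender just fires a datagram at an address, with no handshake and no delivery confirmation. On loopback the datagram arrives reliably, which a real network does not guarantee.

```python
import socket

# Minimal UDP sketch: no connect(), no handshake, just a fired datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick a port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"small burst", receiver.getsockname())  # broadcast into the dark

data, addr = receiver.recvfrom(1024)     # loopback delivery is dependable
print(data)
sender.close()
receiver.close()
```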
At first glance it may be difficult to judge the difference between a hub and a switch since both look roughly the same. They both have a large number of potential connections and are used for the same basic purpose- to create a network. However the biggest difference is not on the outside, but on the inside in the way that they handle connections. In the case of a hub, it broadcasts all data to every port. This can make for serious security and reliability concerns, as well as cause a number of collisions to occur on the network. Old style hubs and present-day wireless access points use this technique. Switches on the other hand create connections dynamically, so that usually only the requesting port can receive the information destined for it. An exception to this rule is that if the switch has its maintenance port turned on for an NIDS implementation, it may copy all data going across the switch to a particular port in order to scan it for problems. The easiest way to make sense of it all is by thinking about it in the case of old style phone connections. A hub would be a 'party line' where everybody is talking all at the same time. It is possible to transmit on such a system, but it can be very hectic and potentially release information to people that you don't want to have access to it. A switch on the other hand is like a phone operator- creating connections between ports on an as-needed basis.
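The address-learning and forward/filter behavior that separates a switch from a hub can be sketched as a toy model (the class and the MAC labels are invented for illustration): the switch records which port each source MAC appears on, forwards known destinations to a single port, and floods like a hub only when the destination is unknown.

```python
# Toy model of a switch's MAC address table and forwarding decision.
class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}              # mac -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port            # address learning
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # forward to one port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

switch = Switch(4)
print(switch.receive(0, "AA", "BB"))  # BB unknown: flooded, hub-style
print(switch.receive(1, "BB", "AA"))  # AA learned on port 0: forwarded there only
```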
☛ Defines the process for connecting two layers, promoting interoperability between vendors.
☛ Separates a complex function into simpler components.
☛ Allows vendors to compartmentalize their design efforts to fit a modular design, which eases implementations and simplifies troubleshooting.
The progressive weakening of a signal as it travels over a cable or other medium. The longer the distance a signal travels, the weaker the signal gets, until it becomes unreadable by the receiving system.